00:00:00.001 Started by upstream project "autotest-per-patch" build number 130844 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.034 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:02.920 The recommended git tool is: git 00:00:02.920 using credential 00000000-0000-0000-0000-000000000002 00:00:02.923 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:02.940 Fetching changes from the remote Git repository 00:00:02.944 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:02.960 Using shallow fetch with depth 1 00:00:02.960 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:02.960 > git --version # timeout=10 00:00:02.975 > git --version # 'git version 2.39.2' 00:00:02.975 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:02.992 Setting http proxy: proxy-dmz.intel.com:911 00:00:02.992 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:09.339 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:09.355 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:09.370 Checking out Revision 1913354106d3abc3c9aeb027a32277f58731b4dc (FETCH_HEAD) 00:00:09.370 > git config core.sparsecheckout # timeout=10 00:00:09.387 > git read-tree -mu HEAD # timeout=10 00:00:09.408 > git checkout -f 1913354106d3abc3c9aeb027a32277f58731b4dc # timeout=5 00:00:09.444 Commit message: "jenkins: update jenkins to 2.462.2 and update plugins to its latest versions" 00:00:09.445 > git rev-list --no-walk 1913354106d3abc3c9aeb027a32277f58731b4dc # timeout=10 00:00:09.558 [Pipeline] Start of Pipeline 00:00:09.572 [Pipeline] library 00:00:09.574 Loading library shm_lib@master 00:00:09.574 Library shm_lib@master is cached. Copying from home. 00:00:09.588 [Pipeline] node 00:00:09.601 Running on CYP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:09.603 [Pipeline] { 00:00:09.614 [Pipeline] catchError 00:00:09.616 [Pipeline] { 00:00:09.628 [Pipeline] wrap 00:00:09.637 [Pipeline] { 00:00:09.644 [Pipeline] stage 00:00:09.646 [Pipeline] { (Prologue) 00:00:09.871 [Pipeline] sh 00:00:10.183 + logger -p user.info -t JENKINS-CI 00:00:10.204 [Pipeline] echo 00:00:10.206 Node: CYP11 00:00:10.215 [Pipeline] sh 00:00:10.524 [Pipeline] setCustomBuildProperty 00:00:10.537 [Pipeline] echo 00:00:10.538 Cleanup processes 00:00:10.544 [Pipeline] sh 00:00:10.832 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.832 3033929 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.848 [Pipeline] sh 00:00:11.134 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:11.134 ++ grep -v 'sudo pgrep' 00:00:11.134 ++ awk '{print $1}' 00:00:11.134 + sudo kill -9 00:00:11.134 + true 00:00:11.151 [Pipeline] cleanWs 00:00:11.161 [WS-CLEANUP] Deleting project workspace... 00:00:11.161 [WS-CLEANUP] Deferred wipeout is used... 
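The trace above ends with the job's stale-process cleanup. A minimal standalone sketch of the same idiom, with WORKSPACE standing in for the job path; the trailing "|| true" matches the "+ true" in the trace, so the step succeeds even when no leftover processes exist:

    # List leftover SPDK processes from a previous run, drop the pgrep line
    # itself, extract the PIDs, and force-kill them; never fail the step.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest   # illustrative
    sudo kill -9 $(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}') || true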
00:00:11.195 [WS-CLEANUP] done 00:00:11.199 [Pipeline] setCustomBuildProperty 00:00:11.213 [Pipeline] sh 00:00:11.504 + sudo git config --global --replace-all safe.directory '*' 00:00:11.600 [Pipeline] httpRequest 00:00:12.188 [Pipeline] echo 00:00:12.191 Sorcerer 10.211.164.101 is alive 00:00:12.201 [Pipeline] retry 00:00:12.203 [Pipeline] { 00:00:12.217 [Pipeline] httpRequest 00:00:12.222 HttpMethod: GET 00:00:12.223 URL: http://10.211.164.101/packages/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz 00:00:12.224 Sending request to url: http://10.211.164.101/packages/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz 00:00:12.232 Response Code: HTTP/1.1 200 OK 00:00:12.233 Success: Status code 200 is in the accepted range: 200,404 00:00:12.233 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz 00:00:20.774 [Pipeline] } 00:00:20.796 [Pipeline] // retry 00:00:20.805 [Pipeline] sh 00:00:21.094 + tar --no-same-owner -xf jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz 00:00:21.114 [Pipeline] httpRequest 00:00:21.415 [Pipeline] echo 00:00:21.420 Sorcerer 10.211.164.101 is alive 00:00:21.441 [Pipeline] retry 00:00:21.448 [Pipeline] { 00:00:21.466 [Pipeline] httpRequest 00:00:21.479 HttpMethod: GET 00:00:21.480 URL: http://10.211.164.101/packages/spdk_70750b651cc13e3ec3582e9cc45a97f7a1da6059.tar.gz 00:00:21.480 Sending request to url: http://10.211.164.101/packages/spdk_70750b651cc13e3ec3582e9cc45a97f7a1da6059.tar.gz 00:00:21.487 Response Code: HTTP/1.1 200 OK 00:00:21.495 Success: Status code 200 is in the accepted range: 200,404 00:00:21.495 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_70750b651cc13e3ec3582e9cc45a97f7a1da6059.tar.gz 00:03:06.377 [Pipeline] } 00:03:06.395 [Pipeline] // retry 00:03:06.403 [Pipeline] sh 00:03:06.696 + tar --no-same-owner -xf spdk_70750b651cc13e3ec3582e9cc45a97f7a1da6059.tar.gz 00:03:10.017 [Pipeline] sh 00:03:10.304 + git -C spdk log --oneline -n5 00:03:10.304 70750b651 test/common: Move nvme_namespace_revert() to nvme/functions.sh 00:03:10.304 3950cd1bb bdev/nvme: Change spdk_bdev_reset() to succeed if at least one nvme_ctrlr is reconnected 00:03:10.304 f9141d271 test/blob: Add BLOCKLEN macro in blob_ut 00:03:10.304 82c46626a lib/event: implement scheduler trace events 00:03:10.304 fa6aec495 lib/thread: register thread owner type for scheduler trace events 00:03:10.316 [Pipeline] } 00:03:10.328 [Pipeline] // stage 00:03:10.336 [Pipeline] stage 00:03:10.338 [Pipeline] { (Prepare) 00:03:10.355 [Pipeline] writeFile 00:03:10.369 [Pipeline] sh 00:03:10.656 + logger -p user.info -t JENKINS-CI 00:03:10.670 [Pipeline] sh 00:03:10.954 + logger -p user.info -t JENKINS-CI 00:03:10.967 [Pipeline] sh 00:03:11.258 + cat autorun-spdk.conf 00:03:11.258 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:11.258 SPDK_TEST_NVMF=1 00:03:11.258 SPDK_TEST_NVME_CLI=1 00:03:11.258 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:11.258 SPDK_TEST_NVMF_NICS=e810 00:03:11.258 SPDK_TEST_VFIOUSER=1 00:03:11.258 SPDK_RUN_UBSAN=1 00:03:11.258 NET_TYPE=phy 00:03:11.266 RUN_NIGHTLY=0 00:03:11.269 [Pipeline] readFile 00:03:11.291 [Pipeline] withEnv 00:03:11.293 [Pipeline] { 00:03:11.306 [Pipeline] sh 00:03:11.599 + set -ex 00:03:11.599 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:03:11.599 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:11.599 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:11.599 ++ SPDK_TEST_NVMF=1 00:03:11.599 ++ SPDK_TEST_NVME_CLI=1 00:03:11.599 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:11.599 ++ SPDK_TEST_NVMF_NICS=e810 00:03:11.599 ++ SPDK_TEST_VFIOUSER=1 00:03:11.599 ++ SPDK_RUN_UBSAN=1 00:03:11.599 ++ NET_TYPE=phy 00:03:11.599 ++ RUN_NIGHTLY=0 00:03:11.599 + case $SPDK_TEST_NVMF_NICS in 00:03:11.599 + DRIVERS=ice 00:03:11.599 + [[ tcp == \r\d\m\a ]] 00:03:11.599 + [[ -n ice ]] 00:03:11.600 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:03:11.600 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:03:11.600 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:03:11.600 rmmod: ERROR: Module irdma is not currently loaded 00:03:11.600 rmmod: ERROR: Module i40iw is not currently loaded 00:03:11.600 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:03:11.600 + true 00:03:11.600 + for D in $DRIVERS 00:03:11.600 + sudo modprobe ice 00:03:11.600 + exit 0 00:03:11.610 [Pipeline] } 00:03:11.625 [Pipeline] // withEnv 00:03:11.630 [Pipeline] } 00:03:11.646 [Pipeline] // stage 00:03:11.657 [Pipeline] catchError 00:03:11.659 [Pipeline] { 00:03:11.672 [Pipeline] timeout 00:03:11.672 Timeout set to expire in 1 hr 0 min 00:03:11.674 [Pipeline] { 00:03:11.689 [Pipeline] stage 00:03:11.692 [Pipeline] { (Tests) 00:03:11.707 [Pipeline] sh 00:03:11.997 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:11.997 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:11.997 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:11.997 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:03:11.997 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:11.997 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:11.997 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:03:11.997 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:11.997 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:11.997 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:11.997 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:03:11.997 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:11.997 + source /etc/os-release 00:03:11.997 ++ NAME='Fedora Linux' 00:03:11.997 ++ VERSION='39 (Cloud Edition)' 00:03:11.997 ++ ID=fedora 00:03:11.997 ++ VERSION_ID=39 00:03:11.997 ++ VERSION_CODENAME= 00:03:11.997 ++ PLATFORM_ID=platform:f39 00:03:11.997 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:11.997 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:11.997 ++ LOGO=fedora-logo-icon 00:03:11.997 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:11.997 ++ HOME_URL=https://fedoraproject.org/ 00:03:11.997 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:11.997 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:11.997 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:11.997 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:11.997 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:11.997 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:11.997 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:11.997 ++ SUPPORT_END=2024-11-12 00:03:11.997 ++ VARIANT='Cloud Edition' 00:03:11.997 ++ VARIANT_ID=cloud 00:03:11.997 + uname -a 00:03:11.997 Linux spdk-cyp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:11.997 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:15.301 Hugepages 00:03:15.301 node hugesize free / total 00:03:15.301 node0 1048576kB 0 / 0 00:03:15.301 node0 2048kB 0 / 0 00:03:15.301 node1 1048576kB 0 / 0 
00:03:15.301 node1 2048kB 0 / 0 00:03:15.301 00:03:15.301 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:15.301 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:15.301 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:15.301 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:15.301 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:15.301 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:15.301 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:15.301 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:15.301 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:15.301 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:15.301 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:15.301 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:15.301 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:15.301 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:15.301 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:15.301 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:15.301 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:15.302 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:03:15.302 + rm -f /tmp/spdk-ld-path 00:03:15.302 + source autorun-spdk.conf 00:03:15.302 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:15.302 ++ SPDK_TEST_NVMF=1 00:03:15.302 ++ SPDK_TEST_NVME_CLI=1 00:03:15.302 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:15.302 ++ SPDK_TEST_NVMF_NICS=e810 00:03:15.302 ++ SPDK_TEST_VFIOUSER=1 00:03:15.302 ++ SPDK_RUN_UBSAN=1 00:03:15.302 ++ NET_TYPE=phy 00:03:15.302 ++ RUN_NIGHTLY=0 00:03:15.302 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:15.302 + [[ -n '' ]] 00:03:15.302 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:15.302 + for M in /var/spdk/build-*-manifest.txt 00:03:15.302 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:15.302 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:15.302 + for M in /var/spdk/build-*-manifest.txt 00:03:15.302 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:15.302 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:15.302 + for M in /var/spdk/build-*-manifest.txt 00:03:15.302 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:15.302 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:15.302 ++ uname 00:03:15.302 + [[ Linux == \L\i\n\u\x ]] 00:03:15.302 + sudo dmesg -T 00:03:15.302 + sudo dmesg --clear 00:03:15.302 + dmesg_pid=3035576 00:03:15.302 + [[ Fedora Linux == FreeBSD ]] 00:03:15.302 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:15.302 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:15.302 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:15.302 + [[ -x /usr/src/fio-static/fio ]] 00:03:15.302 + export FIO_BIN=/usr/src/fio-static/fio 00:03:15.302 + FIO_BIN=/usr/src/fio-static/fio 00:03:15.302 + sudo dmesg -Tw 00:03:15.302 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:15.302 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:03:15.302 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:15.302 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:15.302 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:15.302 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:15.302 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:15.302 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:15.302 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:15.564 Test configuration: 00:03:15.564 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:15.564 SPDK_TEST_NVMF=1 00:03:15.564 SPDK_TEST_NVME_CLI=1 00:03:15.564 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:15.564 SPDK_TEST_NVMF_NICS=e810 00:03:15.564 SPDK_TEST_VFIOUSER=1 00:03:15.564 SPDK_RUN_UBSAN=1 00:03:15.564 NET_TYPE=phy 00:03:15.564 RUN_NIGHTLY=0 09:24:15 -- common/autotest_common.sh@1625 -- $ [[ n == y ]] 00:03:15.564 09:24:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:15.564 09:24:15 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:15.564 09:24:15 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:15.564 09:24:15 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:15.564 09:24:15 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:15.564 09:24:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.564 09:24:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.564 09:24:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.564 09:24:15 -- paths/export.sh@5 -- $ export PATH 00:03:15.564 09:24:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.564 09:24:15 -- common/autobuild_common.sh@485 -- $ 
out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:15.564 09:24:15 -- common/autobuild_common.sh@486 -- $ date +%s 00:03:15.564 09:24:15 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728285855.XXXXXX 00:03:15.564 09:24:15 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728285855.DksgCP 00:03:15.564 09:24:15 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:03:15.564 09:24:15 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:03:15.564 09:24:15 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:03:15.564 09:24:15 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:15.564 09:24:15 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:15.564 09:24:15 -- common/autobuild_common.sh@502 -- $ get_config_params 00:03:15.564 09:24:15 -- common/autotest_common.sh@410 -- $ xtrace_disable 00:03:15.564 09:24:15 -- common/autotest_common.sh@10 -- $ set +x 00:03:15.564 09:24:15 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:03:15.564 09:24:15 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:03:15.564 09:24:15 -- pm/common@17 -- $ local monitor 00:03:15.564 09:24:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.564 09:24:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.564 09:24:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.564 09:24:15 -- pm/common@21 -- $ date +%s 00:03:15.564 09:24:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.564 09:24:15 -- pm/common@25 -- $ sleep 1 00:03:15.564 09:24:15 -- pm/common@21 -- $ date +%s 00:03:15.564 09:24:15 -- pm/common@21 -- $ date +%s 00:03:15.564 09:24:15 -- pm/common@21 -- $ date +%s 00:03:15.564 09:24:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728285855 00:03:15.564 09:24:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728285855 00:03:15.564 09:24:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728285855 00:03:15.564 09:24:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728285855 00:03:15.564 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728285855_collect-cpu-load.pm.log 00:03:15.564 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728285855_collect-vmstat.pm.log 00:03:15.564 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728285855_collect-cpu-temp.pm.log 00:03:15.564 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728285855_collect-bmc-pm.bmc.pm.log 00:03:16.511 09:24:16 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:03:16.511 09:24:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:16.511 09:24:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:16.511 09:24:16 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:16.511 09:24:16 -- spdk/autobuild.sh@16 -- $ date -u 00:03:16.511 Mon Oct 7 07:24:16 AM UTC 2024 00:03:16.511 09:24:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:16.511 v25.01-pre-36-g70750b651 00:03:16.511 09:24:16 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:16.511 09:24:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:16.511 09:24:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:16.511 09:24:16 -- common/autotest_common.sh@1104 -- $ '[' 3 -le 1 ']' 00:03:16.511 09:24:16 -- common/autotest_common.sh@1110 -- $ xtrace_disable 00:03:16.511 09:24:16 -- common/autotest_common.sh@10 -- $ set +x 00:03:16.772 ************************************ 00:03:16.772 START TEST ubsan 00:03:16.772 ************************************ 00:03:16.772 09:24:16 ubsan -- common/autotest_common.sh@1128 -- $ echo 'using ubsan' 00:03:16.772 using ubsan 00:03:16.772 00:03:16.772 real 0m0.001s 00:03:16.772 user 0m0.000s 00:03:16.772 sys 0m0.000s 00:03:16.772 09:24:16 ubsan -- common/autotest_common.sh@1129 -- $ xtrace_disable 00:03:16.772 09:24:16 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:16.772 ************************************ 00:03:16.772 END TEST ubsan 00:03:16.772 ************************************ 00:03:16.772 09:24:16 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:16.772 09:24:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:16.772 09:24:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:16.772 09:24:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:16.772 09:24:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:16.772 09:24:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:16.772 09:24:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:16.772 09:24:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:16.772 09:24:16 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:03:16.773 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:16.773 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:17.344 Using 'verbs' RDMA provider 00:03:33.200 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:45.436 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:46.010 Creating mk/config.mk...done. 00:03:46.010 Creating mk/cc.flags.mk...done. 00:03:46.010 Type 'make' to build. 
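For reference, a hedged sketch of reproducing this build outside CI, using the flags from the config_params line above plus the --with-shared that autobuild appends; the checkout path is illustrative, and --with-fio assumes fio sources at /usr/src/fio as on this host:

    # Configure and build SPDK the way this job does.
    cd /path/to/spdk            # illustrative checkout location
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j144                  # the runner passes -j144; $(nproc) is the usual local choice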
00:03:46.010 09:24:45 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:03:46.010 09:24:45 -- common/autotest_common.sh@1104 -- $ '[' 3 -le 1 ']' 00:03:46.010 09:24:45 -- common/autotest_common.sh@1110 -- $ xtrace_disable 00:03:46.010 09:24:45 -- common/autotest_common.sh@10 -- $ set +x 00:03:46.010 ************************************ 00:03:46.010 START TEST make 00:03:46.010 ************************************ 00:03:46.010 09:24:45 make -- common/autotest_common.sh@1128 -- $ make -j144 00:03:46.583 make[1]: Nothing to be done for 'all'. 00:03:47.966 The Meson build system 00:03:47.966 Version: 1.5.0 00:03:47.966 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:47.966 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:47.966 Build type: native build 00:03:47.966 Project name: libvfio-user 00:03:47.966 Project version: 0.0.1 00:03:47.966 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:47.966 C linker for the host machine: cc ld.bfd 2.40-14 00:03:47.966 Host machine cpu family: x86_64 00:03:47.966 Host machine cpu: x86_64 00:03:47.966 Run-time dependency threads found: YES 00:03:47.966 Library dl found: YES 00:03:47.966 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:47.966 Run-time dependency json-c found: YES 0.17 00:03:47.966 Run-time dependency cmocka found: YES 1.1.7 00:03:47.966 Program pytest-3 found: NO 00:03:47.966 Program flake8 found: NO 00:03:47.966 Program misspell-fixer found: NO 00:03:47.966 Program restructuredtext-lint found: NO 00:03:47.966 Program valgrind found: YES (/usr/bin/valgrind) 00:03:47.966 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:47.966 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:47.966 Compiler for C supports arguments -Wwrite-strings: YES 00:03:47.966 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:47.966 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:47.966 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:47.966 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:47.966 Build targets in project: 8 00:03:47.966 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:47.966 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:47.966 00:03:47.966 libvfio-user 0.0.1 00:03:47.966 00:03:47.966 User defined options 00:03:47.966 buildtype : debug 00:03:47.966 default_library: shared 00:03:47.966 libdir : /usr/local/lib 00:03:47.966 00:03:47.966 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:48.224 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:48.483 [1/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:48.483 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:48.483 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:48.483 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:48.483 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:48.483 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:48.483 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:48.483 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:48.483 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:48.483 [10/37] Compiling C object samples/null.p/null.c.o 00:03:48.483 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:48.483 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:48.483 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:48.483 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:48.483 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:48.483 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:48.483 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:48.483 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:48.483 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:48.483 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:48.483 [21/37] Compiling C object samples/server.p/server.c.o 00:03:48.483 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:48.483 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:48.483 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:48.483 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:48.483 [26/37] Compiling C object samples/client.p/client.c.o 00:03:48.483 [27/37] Linking target samples/client 00:03:48.483 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:48.483 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:48.742 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:03:48.742 [31/37] Linking target test/unit_tests 00:03:48.742 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:48.742 [33/37] Linking target samples/lspci 00:03:48.742 [34/37] Linking target samples/null 00:03:48.742 [35/37] Linking target samples/gpio-pci-idio-16 00:03:48.742 [36/37] Linking target samples/server 00:03:48.742 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:48.742 INFO: autodetecting backend as ninja 00:03:48.742 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
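The libvfio-user steps above follow the standard meson/ninja flow. A minimal sketch under the options shown in the summary (buildtype debug, shared default_library, libdir /usr/local/lib), run from the libvfio-user source directory; the staging directory is illustrative:

    # Configure, compile, and stage-install libvfio-user with meson/ninja.
    meson setup build-debug --buildtype=debug --default-library=shared --libdir=/usr/local/lib
    ninja -C build-debug
    DESTDIR=/path/to/stage meson install --quiet -C build-debug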
00:03:49.003 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:49.264 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:49.264 ninja: no work to do. 00:03:55.981 The Meson build system 00:03:55.981 Version: 1.5.0 00:03:55.981 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:03:55.981 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:03:55.981 Build type: native build 00:03:55.981 Program cat found: YES (/usr/bin/cat) 00:03:55.981 Project name: DPDK 00:03:55.981 Project version: 24.03.0 00:03:55.981 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:55.981 C linker for the host machine: cc ld.bfd 2.40-14 00:03:55.981 Host machine cpu family: x86_64 00:03:55.981 Host machine cpu: x86_64 00:03:55.981 Message: ## Building in Developer Mode ## 00:03:55.981 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:55.981 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:55.981 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:55.981 Program python3 found: YES (/usr/bin/python3) 00:03:55.981 Program cat found: YES (/usr/bin/cat) 00:03:55.981 Compiler for C supports arguments -march=native: YES 00:03:55.981 Checking for size of "void *" : 8 00:03:55.981 Checking for size of "void *" : 8 (cached) 00:03:55.981 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:55.981 Library m found: YES 00:03:55.981 Library numa found: YES 00:03:55.981 Has header "numaif.h" : YES 00:03:55.981 Library fdt found: NO 00:03:55.981 Library execinfo found: NO 00:03:55.981 Has header "execinfo.h" : YES 00:03:55.981 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:55.981 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:55.981 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:55.981 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:55.982 Run-time dependency openssl found: YES 3.1.1 00:03:55.982 Run-time dependency libpcap found: YES 1.10.4 00:03:55.982 Has header "pcap.h" with dependency libpcap: YES 00:03:55.982 Compiler for C supports arguments -Wcast-qual: YES 00:03:55.982 Compiler for C supports arguments -Wdeprecated: YES 00:03:55.982 Compiler for C supports arguments -Wformat: YES 00:03:55.982 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:55.982 Compiler for C supports arguments -Wformat-security: NO 00:03:55.982 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:55.982 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:55.982 Compiler for C supports arguments -Wnested-externs: YES 00:03:55.982 Compiler for C supports arguments -Wold-style-definition: YES 00:03:55.982 Compiler for C supports arguments -Wpointer-arith: YES 00:03:55.982 Compiler for C supports arguments -Wsign-compare: YES 00:03:55.982 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:55.982 Compiler for C supports arguments -Wundef: YES 00:03:55.982 Compiler for C supports arguments -Wwrite-strings: YES 00:03:55.982 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:55.982 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:03:55.982 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:55.982 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:55.982 Program objdump found: YES (/usr/bin/objdump) 00:03:55.982 Compiler for C supports arguments -mavx512f: YES 00:03:55.982 Checking if "AVX512 checking" compiles: YES 00:03:55.982 Fetching value of define "__SSE4_2__" : 1 00:03:55.982 Fetching value of define "__AES__" : 1 00:03:55.982 Fetching value of define "__AVX__" : 1 00:03:55.982 Fetching value of define "__AVX2__" : 1 00:03:55.982 Fetching value of define "__AVX512BW__" : 1 00:03:55.982 Fetching value of define "__AVX512CD__" : 1 00:03:55.982 Fetching value of define "__AVX512DQ__" : 1 00:03:55.982 Fetching value of define "__AVX512F__" : 1 00:03:55.982 Fetching value of define "__AVX512VL__" : 1 00:03:55.982 Fetching value of define "__PCLMUL__" : 1 00:03:55.982 Fetching value of define "__RDRND__" : 1 00:03:55.982 Fetching value of define "__RDSEED__" : 1 00:03:55.982 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:55.982 Fetching value of define "__znver1__" : (undefined) 00:03:55.982 Fetching value of define "__znver2__" : (undefined) 00:03:55.982 Fetching value of define "__znver3__" : (undefined) 00:03:55.982 Fetching value of define "__znver4__" : (undefined) 00:03:55.982 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:55.982 Message: lib/log: Defining dependency "log" 00:03:55.982 Message: lib/kvargs: Defining dependency "kvargs" 00:03:55.982 Message: lib/telemetry: Defining dependency "telemetry" 00:03:55.982 Checking for function "getentropy" : NO 00:03:55.982 Message: lib/eal: Defining dependency "eal" 00:03:55.982 Message: lib/ring: Defining dependency "ring" 00:03:55.982 Message: lib/rcu: Defining dependency "rcu" 00:03:55.982 Message: lib/mempool: Defining dependency "mempool" 00:03:55.982 Message: lib/mbuf: Defining dependency "mbuf" 00:03:55.982 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:55.982 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:55.982 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:55.982 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:55.982 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:55.982 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:55.982 Compiler for C supports arguments -mpclmul: YES 00:03:55.982 Compiler for C supports arguments -maes: YES 00:03:55.982 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:55.982 Compiler for C supports arguments -mavx512bw: YES 00:03:55.982 Compiler for C supports arguments -mavx512dq: YES 00:03:55.982 Compiler for C supports arguments -mavx512vl: YES 00:03:55.982 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:55.982 Compiler for C supports arguments -mavx2: YES 00:03:55.982 Compiler for C supports arguments -mavx: YES 00:03:55.982 Message: lib/net: Defining dependency "net" 00:03:55.982 Message: lib/meter: Defining dependency "meter" 00:03:55.982 Message: lib/ethdev: Defining dependency "ethdev" 00:03:55.982 Message: lib/pci: Defining dependency "pci" 00:03:55.982 Message: lib/cmdline: Defining dependency "cmdline" 00:03:55.982 Message: lib/hash: Defining dependency "hash" 00:03:55.982 Message: lib/timer: Defining dependency "timer" 00:03:55.982 Message: lib/compressdev: Defining dependency "compressdev" 00:03:55.982 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:55.982 Message: lib/dmadev: Defining dependency "dmadev" 
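The "Compiler for C supports arguments ..." and "Checking if ... compiles" lines above are meson's feature probes: each test-compiles a tiny program with the candidate flag and records YES or NO from the compiler's exit status. A rough shell equivalent of a single probe (illustrative only; -Werror forces unknown-flag warnings to count as failures):

    # Roughly what one probe does: compile an empty program with the flag.
    echo 'int main(void){return 0;}' | cc -mavx512f -Werror -x c - -o /dev/null \
        && echo 'supports -mavx512f: YES' || echo 'supports -mavx512f: NO'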
00:03:55.982 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:55.982 Message: lib/power: Defining dependency "power" 00:03:55.982 Message: lib/reorder: Defining dependency "reorder" 00:03:55.982 Message: lib/security: Defining dependency "security" 00:03:55.982 Has header "linux/userfaultfd.h" : YES 00:03:55.982 Has header "linux/vduse.h" : YES 00:03:55.982 Message: lib/vhost: Defining dependency "vhost" 00:03:55.982 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:55.982 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:55.982 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:55.982 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:55.982 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:55.982 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:55.982 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:55.982 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:55.982 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:55.982 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:55.982 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:55.982 Configuring doxy-api-html.conf using configuration 00:03:55.982 Configuring doxy-api-man.conf using configuration 00:03:55.982 Program mandb found: YES (/usr/bin/mandb) 00:03:55.982 Program sphinx-build found: NO 00:03:55.982 Configuring rte_build_config.h using configuration 00:03:55.982 Message: 00:03:55.982 ================= 00:03:55.982 Applications Enabled 00:03:55.982 ================= 00:03:55.982 00:03:55.982 apps: 00:03:55.982 00:03:55.982 00:03:55.982 Message: 00:03:55.982 ================= 00:03:55.982 Libraries Enabled 00:03:55.982 ================= 00:03:55.982 00:03:55.982 libs: 00:03:55.982 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:55.982 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:55.982 cryptodev, dmadev, power, reorder, security, vhost, 00:03:55.982 00:03:55.982 Message: 00:03:55.982 =============== 00:03:55.982 Drivers Enabled 00:03:55.982 =============== 00:03:55.982 00:03:55.982 common: 00:03:55.982 00:03:55.982 bus: 00:03:55.982 pci, vdev, 00:03:55.982 mempool: 00:03:55.982 ring, 00:03:55.982 dma: 00:03:55.982 00:03:55.982 net: 00:03:55.982 00:03:55.982 crypto: 00:03:55.982 00:03:55.982 compress: 00:03:55.982 00:03:55.982 vdpa: 00:03:55.982 00:03:55.982 00:03:55.982 Message: 00:03:55.982 ================= 00:03:55.982 Content Skipped 00:03:55.982 ================= 00:03:55.982 00:03:55.982 apps: 00:03:55.982 dumpcap: explicitly disabled via build config 00:03:55.982 graph: explicitly disabled via build config 00:03:55.982 pdump: explicitly disabled via build config 00:03:55.982 proc-info: explicitly disabled via build config 00:03:55.982 test-acl: explicitly disabled via build config 00:03:55.982 test-bbdev: explicitly disabled via build config 00:03:55.982 test-cmdline: explicitly disabled via build config 00:03:55.982 test-compress-perf: explicitly disabled via build config 00:03:55.982 test-crypto-perf: explicitly disabled via build config 00:03:55.982 test-dma-perf: explicitly disabled via build config 00:03:55.982 test-eventdev: explicitly disabled via build config 00:03:55.982 test-fib: explicitly disabled via build config 00:03:55.982 test-flow-perf: explicitly disabled via build config 00:03:55.982 test-gpudev: explicitly disabled 
via build config 00:03:55.982 test-mldev: explicitly disabled via build config 00:03:55.982 test-pipeline: explicitly disabled via build config 00:03:55.982 test-pmd: explicitly disabled via build config 00:03:55.982 test-regex: explicitly disabled via build config 00:03:55.982 test-sad: explicitly disabled via build config 00:03:55.982 test-security-perf: explicitly disabled via build config 00:03:55.982 00:03:55.982 libs: 00:03:55.983 argparse: explicitly disabled via build config 00:03:55.983 metrics: explicitly disabled via build config 00:03:55.983 acl: explicitly disabled via build config 00:03:55.983 bbdev: explicitly disabled via build config 00:03:55.983 bitratestats: explicitly disabled via build config 00:03:55.983 bpf: explicitly disabled via build config 00:03:55.983 cfgfile: explicitly disabled via build config 00:03:55.983 distributor: explicitly disabled via build config 00:03:55.983 efd: explicitly disabled via build config 00:03:55.983 eventdev: explicitly disabled via build config 00:03:55.983 dispatcher: explicitly disabled via build config 00:03:55.983 gpudev: explicitly disabled via build config 00:03:55.983 gro: explicitly disabled via build config 00:03:55.983 gso: explicitly disabled via build config 00:03:55.983 ip_frag: explicitly disabled via build config 00:03:55.983 jobstats: explicitly disabled via build config 00:03:55.983 latencystats: explicitly disabled via build config 00:03:55.983 lpm: explicitly disabled via build config 00:03:55.983 member: explicitly disabled via build config 00:03:55.983 pcapng: explicitly disabled via build config 00:03:55.983 rawdev: explicitly disabled via build config 00:03:55.983 regexdev: explicitly disabled via build config 00:03:55.983 mldev: explicitly disabled via build config 00:03:55.983 rib: explicitly disabled via build config 00:03:55.983 sched: explicitly disabled via build config 00:03:55.983 stack: explicitly disabled via build config 00:03:55.983 ipsec: explicitly disabled via build config 00:03:55.983 pdcp: explicitly disabled via build config 00:03:55.983 fib: explicitly disabled via build config 00:03:55.983 port: explicitly disabled via build config 00:03:55.983 pdump: explicitly disabled via build config 00:03:55.983 table: explicitly disabled via build config 00:03:55.983 pipeline: explicitly disabled via build config 00:03:55.983 graph: explicitly disabled via build config 00:03:55.983 node: explicitly disabled via build config 00:03:55.983 00:03:55.983 drivers: 00:03:55.983 common/cpt: not in enabled drivers build config 00:03:55.983 common/dpaax: not in enabled drivers build config 00:03:55.983 common/iavf: not in enabled drivers build config 00:03:55.983 common/idpf: not in enabled drivers build config 00:03:55.983 common/ionic: not in enabled drivers build config 00:03:55.983 common/mvep: not in enabled drivers build config 00:03:55.983 common/octeontx: not in enabled drivers build config 00:03:55.983 bus/auxiliary: not in enabled drivers build config 00:03:55.983 bus/cdx: not in enabled drivers build config 00:03:55.983 bus/dpaa: not in enabled drivers build config 00:03:55.983 bus/fslmc: not in enabled drivers build config 00:03:55.983 bus/ifpga: not in enabled drivers build config 00:03:55.983 bus/platform: not in enabled drivers build config 00:03:55.983 bus/uacce: not in enabled drivers build config 00:03:55.983 bus/vmbus: not in enabled drivers build config 00:03:55.983 common/cnxk: not in enabled drivers build config 00:03:55.983 common/mlx5: not in enabled drivers build config 00:03:55.983 
common/nfp: not in enabled drivers build config 00:03:55.983 common/nitrox: not in enabled drivers build config 00:03:55.983 common/qat: not in enabled drivers build config 00:03:55.983 common/sfc_efx: not in enabled drivers build config 00:03:55.983 mempool/bucket: not in enabled drivers build config 00:03:55.983 mempool/cnxk: not in enabled drivers build config 00:03:55.983 mempool/dpaa: not in enabled drivers build config 00:03:55.983 mempool/dpaa2: not in enabled drivers build config 00:03:55.983 mempool/octeontx: not in enabled drivers build config 00:03:55.983 mempool/stack: not in enabled drivers build config 00:03:55.983 dma/cnxk: not in enabled drivers build config 00:03:55.983 dma/dpaa: not in enabled drivers build config 00:03:55.983 dma/dpaa2: not in enabled drivers build config 00:03:55.983 dma/hisilicon: not in enabled drivers build config 00:03:55.983 dma/idxd: not in enabled drivers build config 00:03:55.983 dma/ioat: not in enabled drivers build config 00:03:55.983 dma/skeleton: not in enabled drivers build config 00:03:55.983 net/af_packet: not in enabled drivers build config 00:03:55.983 net/af_xdp: not in enabled drivers build config 00:03:55.983 net/ark: not in enabled drivers build config 00:03:55.983 net/atlantic: not in enabled drivers build config 00:03:55.983 net/avp: not in enabled drivers build config 00:03:55.983 net/axgbe: not in enabled drivers build config 00:03:55.983 net/bnx2x: not in enabled drivers build config 00:03:55.983 net/bnxt: not in enabled drivers build config 00:03:55.983 net/bonding: not in enabled drivers build config 00:03:55.983 net/cnxk: not in enabled drivers build config 00:03:55.983 net/cpfl: not in enabled drivers build config 00:03:55.983 net/cxgbe: not in enabled drivers build config 00:03:55.983 net/dpaa: not in enabled drivers build config 00:03:55.983 net/dpaa2: not in enabled drivers build config 00:03:55.983 net/e1000: not in enabled drivers build config 00:03:55.983 net/ena: not in enabled drivers build config 00:03:55.983 net/enetc: not in enabled drivers build config 00:03:55.983 net/enetfec: not in enabled drivers build config 00:03:55.983 net/enic: not in enabled drivers build config 00:03:55.983 net/failsafe: not in enabled drivers build config 00:03:55.983 net/fm10k: not in enabled drivers build config 00:03:55.983 net/gve: not in enabled drivers build config 00:03:55.983 net/hinic: not in enabled drivers build config 00:03:55.983 net/hns3: not in enabled drivers build config 00:03:55.983 net/i40e: not in enabled drivers build config 00:03:55.983 net/iavf: not in enabled drivers build config 00:03:55.983 net/ice: not in enabled drivers build config 00:03:55.983 net/idpf: not in enabled drivers build config 00:03:55.983 net/igc: not in enabled drivers build config 00:03:55.983 net/ionic: not in enabled drivers build config 00:03:55.983 net/ipn3ke: not in enabled drivers build config 00:03:55.983 net/ixgbe: not in enabled drivers build config 00:03:55.983 net/mana: not in enabled drivers build config 00:03:55.983 net/memif: not in enabled drivers build config 00:03:55.983 net/mlx4: not in enabled drivers build config 00:03:55.983 net/mlx5: not in enabled drivers build config 00:03:55.983 net/mvneta: not in enabled drivers build config 00:03:55.983 net/mvpp2: not in enabled drivers build config 00:03:55.983 net/netvsc: not in enabled drivers build config 00:03:55.983 net/nfb: not in enabled drivers build config 00:03:55.983 net/nfp: not in enabled drivers build config 00:03:55.983 net/ngbe: not in enabled drivers build 
config 00:03:55.983 net/null: not in enabled drivers build config 00:03:55.983 net/octeontx: not in enabled drivers build config 00:03:55.983 net/octeon_ep: not in enabled drivers build config 00:03:55.983 net/pcap: not in enabled drivers build config 00:03:55.983 net/pfe: not in enabled drivers build config 00:03:55.983 net/qede: not in enabled drivers build config 00:03:55.983 net/ring: not in enabled drivers build config 00:03:55.983 net/sfc: not in enabled drivers build config 00:03:55.983 net/softnic: not in enabled drivers build config 00:03:55.983 net/tap: not in enabled drivers build config 00:03:55.983 net/thunderx: not in enabled drivers build config 00:03:55.983 net/txgbe: not in enabled drivers build config 00:03:55.983 net/vdev_netvsc: not in enabled drivers build config 00:03:55.983 net/vhost: not in enabled drivers build config 00:03:55.983 net/virtio: not in enabled drivers build config 00:03:55.983 net/vmxnet3: not in enabled drivers build config 00:03:55.983 raw/*: missing internal dependency, "rawdev" 00:03:55.983 crypto/armv8: not in enabled drivers build config 00:03:55.983 crypto/bcmfs: not in enabled drivers build config 00:03:55.983 crypto/caam_jr: not in enabled drivers build config 00:03:55.983 crypto/ccp: not in enabled drivers build config 00:03:55.983 crypto/cnxk: not in enabled drivers build config 00:03:55.983 crypto/dpaa_sec: not in enabled drivers build config 00:03:55.983 crypto/dpaa2_sec: not in enabled drivers build config 00:03:55.983 crypto/ipsec_mb: not in enabled drivers build config 00:03:55.983 crypto/mlx5: not in enabled drivers build config 00:03:55.983 crypto/mvsam: not in enabled drivers build config 00:03:55.983 crypto/nitrox: not in enabled drivers build config 00:03:55.983 crypto/null: not in enabled drivers build config 00:03:55.983 crypto/octeontx: not in enabled drivers build config 00:03:55.983 crypto/openssl: not in enabled drivers build config 00:03:55.983 crypto/scheduler: not in enabled drivers build config 00:03:55.983 crypto/uadk: not in enabled drivers build config 00:03:55.983 crypto/virtio: not in enabled drivers build config 00:03:55.984 compress/isal: not in enabled drivers build config 00:03:55.984 compress/mlx5: not in enabled drivers build config 00:03:55.984 compress/nitrox: not in enabled drivers build config 00:03:55.984 compress/octeontx: not in enabled drivers build config 00:03:55.984 compress/zlib: not in enabled drivers build config 00:03:55.984 regex/*: missing internal dependency, "regexdev" 00:03:55.984 ml/*: missing internal dependency, "mldev" 00:03:55.984 vdpa/ifc: not in enabled drivers build config 00:03:55.984 vdpa/mlx5: not in enabled drivers build config 00:03:55.984 vdpa/nfp: not in enabled drivers build config 00:03:55.984 vdpa/sfc: not in enabled drivers build config 00:03:55.984 event/*: missing internal dependency, "eventdev" 00:03:55.984 baseband/*: missing internal dependency, "bbdev" 00:03:55.984 gpu/*: missing internal dependency, "gpudev" 00:03:55.984 00:03:55.984 00:03:55.984 Build targets in project: 84 00:03:55.984 00:03:55.984 DPDK 24.03.0 00:03:55.984 00:03:55.984 User defined options 00:03:55.984 buildtype : debug 00:03:55.984 default_library : shared 00:03:55.984 libdir : lib 00:03:55.984 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:55.984 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:55.984 c_link_args : 00:03:55.984 cpu_instruction_set: native 00:03:55.984 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:03:55.984 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:03:55.984 enable_docs : false 00:03:55.984 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:55.984 enable_kmods : false 00:03:55.984 max_lcores : 128 00:03:55.984 tests : false 00:03:55.984 00:03:55.984 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:55.984 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:55.984 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:55.984 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:55.984 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:55.984 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:55.984 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:55.984 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:55.984 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:55.984 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:55.984 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:55.984 [10/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:55.984 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:55.984 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:55.984 [13/267] Linking static target lib/librte_kvargs.a 00:03:55.984 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:55.984 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:55.984 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:55.984 [17/267] Linking static target lib/librte_log.a 00:03:55.984 [18/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:55.984 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:55.984 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:55.984 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:55.984 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:55.984 [23/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:55.984 [24/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:55.984 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:55.984 [26/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:55.984 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:55.984 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:55.984 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:55.984 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:56.247 [31/267] Linking static target 
lib/librte_pci.a 00:03:56.247 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:56.247 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:56.247 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:56.247 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:56.247 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:56.247 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:56.247 [38/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:56.247 [39/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.247 [40/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:56.247 [41/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:56.508 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:56.508 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:56.508 [44/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.508 [45/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:56.508 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:56.508 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:56.508 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:56.508 [49/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:56.508 [50/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:56.508 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:56.508 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:56.508 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:56.508 [54/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:56.508 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:56.508 [56/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:56.508 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:56.508 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:56.508 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:56.508 [60/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:56.508 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:56.508 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:56.508 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:56.508 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:56.508 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:56.508 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:56.508 [67/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:56.508 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:56.508 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:56.508 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 
00:03:56.508 [71/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:56.508 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:56.508 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:56.508 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:56.508 [75/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:56.508 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:56.508 [77/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:56.508 [78/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:56.508 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:56.508 [80/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:56.508 [81/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:56.508 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:56.508 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:56.508 [84/267] Linking static target lib/librte_ring.a 00:03:56.508 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:56.508 [86/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:56.508 [87/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:56.508 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:56.508 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:56.508 [90/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:56.508 [91/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:56.508 [92/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:56.508 [93/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:56.508 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:56.508 [95/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:56.508 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:56.508 [97/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:56.508 [98/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:56.509 [99/267] Linking static target lib/librte_timer.a 00:03:56.509 [100/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:56.509 [101/267] Linking static target lib/librte_meter.a 00:03:56.509 [102/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:56.509 [103/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:56.509 [104/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:56.509 [105/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:56.509 [106/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:56.509 [107/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:56.509 [108/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:56.509 [109/267] Linking static target lib/librte_cmdline.a 00:03:56.509 [110/267] Linking static target lib/librte_telemetry.a 00:03:56.509 [111/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:56.509 
[112/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:56.509 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:56.509 [114/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:56.509 [115/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:56.509 [116/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:56.509 [117/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:56.509 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:56.509 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:56.509 [120/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:56.509 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:56.509 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:56.509 [123/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:56.509 [124/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:56.509 [125/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:56.509 [126/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:56.509 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:56.509 [128/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:56.509 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:56.509 [130/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:56.509 [131/267] Linking static target lib/librte_net.a 00:03:56.509 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:56.509 [133/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:56.509 [134/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:56.509 [135/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:56.509 [136/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:56.509 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:56.509 [138/267] Linking static target lib/librte_mempool.a 00:03:56.509 [139/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:56.509 [140/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:56.509 [141/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:56.509 [142/267] Linking static target lib/librte_dmadev.a 00:03:56.509 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:56.509 [144/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:56.509 [145/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.509 [146/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:56.509 [147/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:56.509 [148/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:56.509 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:56.509 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:56.509 [151/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:56.770 [152/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:56.770 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:56.770 [154/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:56.770 [155/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:56.770 [156/267] Linking static target lib/librte_compressdev.a 00:03:56.770 [157/267] Linking static target lib/librte_rcu.a 00:03:56.770 [158/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:56.770 [159/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:56.770 [160/267] Linking static target lib/librte_power.a 00:03:56.770 [161/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:56.770 [162/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:56.770 [163/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:56.770 [164/267] Linking target lib/librte_log.so.24.1 00:03:56.770 [165/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:56.770 [166/267] Linking static target lib/librte_reorder.a 00:03:56.770 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:56.770 [168/267] Linking static target lib/librte_security.a 00:03:56.770 [169/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:56.770 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:56.770 [171/267] Linking static target lib/librte_eal.a 00:03:56.770 [172/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:56.770 [173/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:56.770 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:56.770 [175/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:56.770 [176/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:56.770 [177/267] Linking static target lib/librte_mbuf.a 00:03:56.770 [178/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:56.770 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:56.770 [180/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:56.770 [181/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:56.770 [182/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:56.770 [183/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.770 [184/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.770 [185/267] Linking target lib/librte_kvargs.so.24.1 00:03:56.770 [186/267] Linking static target drivers/librte_bus_vdev.a 00:03:56.770 [187/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:56.770 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:56.770 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:56.770 [190/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:56.770 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:56.770 [192/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:57.031 [193/267] Linking static target lib/librte_hash.a 00:03:57.031 [194/267] Generating lib/net.sym_chk with a custom 
command (wrapped by meson to capture output) 00:03:57.031 [195/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:57.031 [196/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:57.031 [197/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:57.031 [198/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:57.031 [199/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:57.031 [200/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.031 [201/267] Linking static target drivers/librte_bus_pci.a 00:03:57.031 [202/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:57.031 [203/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:57.031 [204/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:57.031 [205/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.031 [206/267] Linking static target drivers/librte_mempool_ring.a 00:03:57.031 [207/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:57.031 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:57.031 [209/267] Linking static target lib/librte_cryptodev.a 00:03:57.293 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.293 [211/267] Linking target lib/librte_telemetry.so.24.1 00:03:57.293 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.293 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.293 [214/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.293 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:57.293 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.554 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:57.554 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.554 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:57.554 [220/267] Linking static target lib/librte_ethdev.a 00:03:57.554 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.554 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.816 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.816 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.816 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.078 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.339 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:58.601 [228/267] Linking static target lib/librte_vhost.a 00:03:59.173 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.560 
[230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.146 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.087 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.348 [233/267] Linking target lib/librte_eal.so.24.1 00:04:08.348 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:08.348 [235/267] Linking target lib/librte_ring.so.24.1 00:04:08.348 [236/267] Linking target lib/librte_meter.so.24.1 00:04:08.348 [237/267] Linking target lib/librte_pci.so.24.1 00:04:08.348 [238/267] Linking target lib/librte_timer.so.24.1 00:04:08.348 [239/267] Linking target lib/librte_dmadev.so.24.1 00:04:08.348 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:04:08.609 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:08.609 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:08.609 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:08.609 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:08.609 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:08.609 [246/267] Linking target lib/librte_rcu.so.24.1 00:04:08.609 [247/267] Linking target lib/librte_mempool.so.24.1 00:04:08.609 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:04:08.609 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:08.609 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:08.869 [251/267] Linking target lib/librte_mbuf.so.24.1 00:04:08.869 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:04:08.869 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:08.869 [254/267] Linking target lib/librte_net.so.24.1 00:04:08.869 [255/267] Linking target lib/librte_reorder.so.24.1 00:04:08.869 [256/267] Linking target lib/librte_compressdev.so.24.1 00:04:08.869 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:04:09.129 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:09.129 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:09.129 [260/267] Linking target lib/librte_hash.so.24.1 00:04:09.129 [261/267] Linking target lib/librte_ethdev.so.24.1 00:04:09.129 [262/267] Linking target lib/librte_cmdline.so.24.1 00:04:09.129 [263/267] Linking target lib/librte_security.so.24.1 00:04:09.129 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:09.129 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:09.390 [266/267] Linking target lib/librte_power.so.24.1 00:04:09.390 [267/267] Linking target lib/librte_vhost.so.24.1 00:04:09.390 INFO: autodetecting backend as ninja 00:04:09.390 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:04:13.592 CC lib/ut_mock/mock.o 00:04:13.592 CC lib/ut/ut.o 00:04:13.592 CC lib/log/log.o 00:04:13.592 CC lib/log/log_flags.o 00:04:13.592 CC lib/log/log_deprecated.o 00:04:13.592 LIB libspdk_log.a 00:04:13.592 LIB libspdk_ut_mock.a 00:04:13.592 SO libspdk_ut_mock.so.6.0 00:04:13.592 SO 
libspdk_log.so.7.0 00:04:13.592 LIB libspdk_ut.a 00:04:13.592 SO libspdk_ut.so.2.0 00:04:13.592 SYMLINK libspdk_ut_mock.so 00:04:13.592 SYMLINK libspdk_log.so 00:04:13.592 SYMLINK libspdk_ut.so 00:04:13.592 CC lib/dma/dma.o 00:04:13.592 CC lib/ioat/ioat.o 00:04:13.592 CC lib/util/base64.o 00:04:13.592 CXX lib/trace_parser/trace.o 00:04:13.592 CC lib/util/bit_array.o 00:04:13.592 CC lib/util/cpuset.o 00:04:13.592 CC lib/util/crc16.o 00:04:13.592 CC lib/util/crc32.o 00:04:13.592 CC lib/util/crc32c.o 00:04:13.592 CC lib/util/crc32_ieee.o 00:04:13.592 CC lib/util/crc64.o 00:04:13.592 CC lib/util/dif.o 00:04:13.592 CC lib/util/fd.o 00:04:13.592 CC lib/util/fd_group.o 00:04:13.592 CC lib/util/file.o 00:04:13.592 CC lib/util/hexlify.o 00:04:13.592 CC lib/util/iov.o 00:04:13.592 CC lib/util/math.o 00:04:13.592 CC lib/util/net.o 00:04:13.592 CC lib/util/pipe.o 00:04:13.592 CC lib/util/strerror_tls.o 00:04:13.592 CC lib/util/string.o 00:04:13.592 CC lib/util/uuid.o 00:04:13.592 CC lib/util/xor.o 00:04:13.592 CC lib/util/zipf.o 00:04:13.592 CC lib/util/md5.o 00:04:13.852 CC lib/vfio_user/host/vfio_user_pci.o 00:04:13.852 CC lib/vfio_user/host/vfio_user.o 00:04:13.852 LIB libspdk_dma.a 00:04:13.852 SO libspdk_dma.so.5.0 00:04:13.852 LIB libspdk_ioat.a 00:04:13.852 SYMLINK libspdk_dma.so 00:04:14.112 SO libspdk_ioat.so.7.0 00:04:14.112 SYMLINK libspdk_ioat.so 00:04:14.112 LIB libspdk_vfio_user.a 00:04:14.112 LIB libspdk_util.a 00:04:14.112 SO libspdk_vfio_user.so.5.0 00:04:14.112 SYMLINK libspdk_vfio_user.so 00:04:14.112 SO libspdk_util.so.10.0 00:04:14.373 SYMLINK libspdk_util.so 00:04:14.373 LIB libspdk_trace_parser.a 00:04:14.634 SO libspdk_trace_parser.so.6.0 00:04:14.634 SYMLINK libspdk_trace_parser.so 00:04:14.634 CC lib/vmd/vmd.o 00:04:14.634 CC lib/conf/conf.o 00:04:14.634 CC lib/vmd/led.o 00:04:14.634 CC lib/json/json_parse.o 00:04:14.634 CC lib/json/json_util.o 00:04:14.634 CC lib/idxd/idxd.o 00:04:14.634 CC lib/json/json_write.o 00:04:14.635 CC lib/env_dpdk/env.o 00:04:14.635 CC lib/idxd/idxd_user.o 00:04:14.635 CC lib/env_dpdk/memory.o 00:04:14.635 CC lib/idxd/idxd_kernel.o 00:04:14.635 CC lib/rdma_utils/rdma_utils.o 00:04:14.635 CC lib/env_dpdk/pci.o 00:04:14.635 CC lib/env_dpdk/init.o 00:04:14.635 CC lib/env_dpdk/threads.o 00:04:14.635 CC lib/env_dpdk/pci_ioat.o 00:04:14.635 CC lib/rdma_provider/common.o 00:04:14.635 CC lib/env_dpdk/pci_virtio.o 00:04:14.635 CC lib/env_dpdk/pci_vmd.o 00:04:14.635 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:14.635 CC lib/env_dpdk/pci_idxd.o 00:04:14.635 CC lib/env_dpdk/pci_event.o 00:04:14.635 CC lib/env_dpdk/sigbus_handler.o 00:04:14.635 CC lib/env_dpdk/pci_dpdk.o 00:04:14.635 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:14.635 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:14.896 LIB libspdk_rdma_provider.a 00:04:14.896 LIB libspdk_conf.a 00:04:14.896 SO libspdk_rdma_provider.so.6.0 00:04:14.896 SO libspdk_conf.so.6.0 00:04:15.157 LIB libspdk_rdma_utils.a 00:04:15.157 LIB libspdk_json.a 00:04:15.157 SYMLINK libspdk_conf.so 00:04:15.157 SYMLINK libspdk_rdma_provider.so 00:04:15.157 SO libspdk_rdma_utils.so.1.0 00:04:15.157 SO libspdk_json.so.6.0 00:04:15.157 SYMLINK libspdk_rdma_utils.so 00:04:15.157 SYMLINK libspdk_json.so 00:04:15.157 LIB libspdk_idxd.a 00:04:15.418 LIB libspdk_vmd.a 00:04:15.418 SO libspdk_idxd.so.12.1 00:04:15.418 SO libspdk_vmd.so.6.0 00:04:15.418 SYMLINK libspdk_idxd.so 00:04:15.418 SYMLINK libspdk_vmd.so 00:04:15.418 CC lib/jsonrpc/jsonrpc_server.o 00:04:15.418 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:15.418 CC lib/jsonrpc/jsonrpc_client.o 
00:04:15.418 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:15.679 LIB libspdk_jsonrpc.a 00:04:15.940 SO libspdk_jsonrpc.so.6.0 00:04:15.940 SYMLINK libspdk_jsonrpc.so 00:04:15.940 LIB libspdk_env_dpdk.a 00:04:15.940 SO libspdk_env_dpdk.so.15.0 00:04:16.201 SYMLINK libspdk_env_dpdk.so 00:04:16.201 CC lib/rpc/rpc.o 00:04:16.462 LIB libspdk_rpc.a 00:04:16.462 SO libspdk_rpc.so.6.0 00:04:16.723 SYMLINK libspdk_rpc.so 00:04:16.983 CC lib/trace/trace.o 00:04:16.983 CC lib/trace/trace_flags.o 00:04:16.983 CC lib/keyring/keyring.o 00:04:16.983 CC lib/keyring/keyring_rpc.o 00:04:16.983 CC lib/trace/trace_rpc.o 00:04:16.983 CC lib/notify/notify.o 00:04:16.983 CC lib/notify/notify_rpc.o 00:04:17.243 LIB libspdk_notify.a 00:04:17.244 SO libspdk_notify.so.6.0 00:04:17.244 LIB libspdk_keyring.a 00:04:17.244 LIB libspdk_trace.a 00:04:17.244 SO libspdk_keyring.so.2.0 00:04:17.244 SYMLINK libspdk_notify.so 00:04:17.244 SO libspdk_trace.so.11.0 00:04:17.244 SYMLINK libspdk_keyring.so 00:04:17.244 SYMLINK libspdk_trace.so 00:04:17.814 CC lib/thread/thread.o 00:04:17.814 CC lib/sock/sock.o 00:04:17.814 CC lib/thread/iobuf.o 00:04:17.814 CC lib/sock/sock_rpc.o 00:04:18.077 LIB libspdk_sock.a 00:04:18.077 SO libspdk_sock.so.10.0 00:04:18.077 SYMLINK libspdk_sock.so 00:04:18.340 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:18.340 CC lib/nvme/nvme_ctrlr.o 00:04:18.340 CC lib/nvme/nvme_fabric.o 00:04:18.340 CC lib/nvme/nvme_ns_cmd.o 00:04:18.340 CC lib/nvme/nvme_ns.o 00:04:18.340 CC lib/nvme/nvme_pcie_common.o 00:04:18.340 CC lib/nvme/nvme_pcie.o 00:04:18.340 CC lib/nvme/nvme_qpair.o 00:04:18.340 CC lib/nvme/nvme.o 00:04:18.340 CC lib/nvme/nvme_quirks.o 00:04:18.340 CC lib/nvme/nvme_transport.o 00:04:18.340 CC lib/nvme/nvme_discovery.o 00:04:18.340 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:18.340 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:18.340 CC lib/nvme/nvme_tcp.o 00:04:18.340 CC lib/nvme/nvme_opal.o 00:04:18.340 CC lib/nvme/nvme_io_msg.o 00:04:18.340 CC lib/nvme/nvme_poll_group.o 00:04:18.340 CC lib/nvme/nvme_zns.o 00:04:18.340 CC lib/nvme/nvme_stubs.o 00:04:18.340 CC lib/nvme/nvme_auth.o 00:04:18.340 CC lib/nvme/nvme_cuse.o 00:04:18.340 CC lib/nvme/nvme_vfio_user.o 00:04:18.340 CC lib/nvme/nvme_rdma.o 00:04:18.915 LIB libspdk_thread.a 00:04:19.177 SO libspdk_thread.so.10.2 00:04:19.177 SYMLINK libspdk_thread.so 00:04:19.438 CC lib/virtio/virtio.o 00:04:19.438 CC lib/virtio/virtio_vhost_user.o 00:04:19.438 CC lib/virtio/virtio_vfio_user.o 00:04:19.438 CC lib/virtio/virtio_pci.o 00:04:19.438 CC lib/accel/accel.o 00:04:19.438 CC lib/fsdev/fsdev.o 00:04:19.438 CC lib/accel/accel_rpc.o 00:04:19.438 CC lib/init/json_config.o 00:04:19.438 CC lib/accel/accel_sw.o 00:04:19.438 CC lib/fsdev/fsdev_io.o 00:04:19.438 CC lib/init/subsystem.o 00:04:19.438 CC lib/fsdev/fsdev_rpc.o 00:04:19.438 CC lib/vfu_tgt/tgt_endpoint.o 00:04:19.438 CC lib/init/subsystem_rpc.o 00:04:19.438 CC lib/vfu_tgt/tgt_rpc.o 00:04:19.438 CC lib/init/rpc.o 00:04:19.438 CC lib/blob/blobstore.o 00:04:19.438 CC lib/blob/request.o 00:04:19.438 CC lib/blob/zeroes.o 00:04:19.438 CC lib/blob/blob_bs_dev.o 00:04:19.699 LIB libspdk_init.a 00:04:19.699 SO libspdk_init.so.6.0 00:04:19.959 LIB libspdk_vfu_tgt.a 00:04:19.959 LIB libspdk_virtio.a 00:04:19.959 SO libspdk_vfu_tgt.so.3.0 00:04:19.959 SO libspdk_virtio.so.7.0 00:04:19.959 SYMLINK libspdk_init.so 00:04:19.959 SYMLINK libspdk_vfu_tgt.so 00:04:19.959 SYMLINK libspdk_virtio.so 00:04:20.220 LIB libspdk_fsdev.a 00:04:20.220 SO libspdk_fsdev.so.1.0 00:04:20.220 SYMLINK libspdk_fsdev.so 00:04:20.220 CC lib/event/app.o 
00:04:20.220 CC lib/event/reactor.o 00:04:20.220 CC lib/event/log_rpc.o 00:04:20.220 CC lib/event/app_rpc.o 00:04:20.220 CC lib/event/scheduler_static.o 00:04:20.481 LIB libspdk_nvme.a 00:04:20.481 LIB libspdk_accel.a 00:04:20.481 SO libspdk_accel.so.16.0 00:04:20.481 SO libspdk_nvme.so.14.0 00:04:20.481 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:20.741 SYMLINK libspdk_accel.so 00:04:20.741 LIB libspdk_event.a 00:04:20.741 SO libspdk_event.so.15.0 00:04:20.741 SYMLINK libspdk_nvme.so 00:04:20.741 SYMLINK libspdk_event.so 00:04:21.002 CC lib/bdev/bdev.o 00:04:21.002 CC lib/bdev/bdev_rpc.o 00:04:21.002 CC lib/bdev/bdev_zone.o 00:04:21.002 CC lib/bdev/part.o 00:04:21.002 CC lib/bdev/scsi_nvme.o 00:04:21.262 LIB libspdk_fuse_dispatcher.a 00:04:21.262 SO libspdk_fuse_dispatcher.so.1.0 00:04:21.262 SYMLINK libspdk_fuse_dispatcher.so 00:04:22.206 LIB libspdk_blob.a 00:04:22.206 SO libspdk_blob.so.11.0 00:04:22.206 SYMLINK libspdk_blob.so 00:04:22.779 CC lib/lvol/lvol.o 00:04:22.779 CC lib/blobfs/blobfs.o 00:04:22.779 CC lib/blobfs/tree.o 00:04:23.350 LIB libspdk_bdev.a 00:04:23.350 SO libspdk_bdev.so.17.0 00:04:23.350 LIB libspdk_blobfs.a 00:04:23.350 SO libspdk_blobfs.so.10.0 00:04:23.350 SYMLINK libspdk_bdev.so 00:04:23.612 LIB libspdk_lvol.a 00:04:23.612 SO libspdk_lvol.so.10.0 00:04:23.612 SYMLINK libspdk_blobfs.so 00:04:23.612 SYMLINK libspdk_lvol.so 00:04:23.876 CC lib/nvmf/ctrlr.o 00:04:23.876 CC lib/nvmf/ctrlr_discovery.o 00:04:23.876 CC lib/nvmf/ctrlr_bdev.o 00:04:23.876 CC lib/nvmf/subsystem.o 00:04:23.876 CC lib/nvmf/nvmf.o 00:04:23.876 CC lib/ftl/ftl_core.o 00:04:23.876 CC lib/ftl/ftl_init.o 00:04:23.876 CC lib/nvmf/transport.o 00:04:23.876 CC lib/nvmf/nvmf_rpc.o 00:04:23.876 CC lib/ftl/ftl_layout.o 00:04:23.876 CC lib/nvmf/tcp.o 00:04:23.876 CC lib/ftl/ftl_debug.o 00:04:23.876 CC lib/ftl/ftl_io.o 00:04:23.876 CC lib/nbd/nbd.o 00:04:23.876 CC lib/nvmf/stubs.o 00:04:23.876 CC lib/ublk/ublk.o 00:04:23.876 CC lib/nvmf/mdns_server.o 00:04:23.876 CC lib/ublk/ublk_rpc.o 00:04:23.876 CC lib/ftl/ftl_sb.o 00:04:23.876 CC lib/nbd/nbd_rpc.o 00:04:23.876 CC lib/nvmf/vfio_user.o 00:04:23.876 CC lib/ftl/ftl_l2p.o 00:04:23.876 CC lib/ftl/ftl_l2p_flat.o 00:04:23.876 CC lib/nvmf/rdma.o 00:04:23.876 CC lib/scsi/dev.o 00:04:23.876 CC lib/ftl/ftl_nv_cache.o 00:04:23.876 CC lib/nvmf/auth.o 00:04:23.876 CC lib/scsi/lun.o 00:04:23.876 CC lib/ftl/ftl_band.o 00:04:23.876 CC lib/scsi/port.o 00:04:23.876 CC lib/ftl/ftl_band_ops.o 00:04:23.876 CC lib/scsi/scsi.o 00:04:23.876 CC lib/ftl/ftl_writer.o 00:04:23.876 CC lib/scsi/scsi_bdev.o 00:04:23.876 CC lib/ftl/ftl_rq.o 00:04:23.876 CC lib/scsi/scsi_pr.o 00:04:23.876 CC lib/ftl/ftl_reloc.o 00:04:23.876 CC lib/scsi/scsi_rpc.o 00:04:23.876 CC lib/ftl/ftl_l2p_cache.o 00:04:23.876 CC lib/scsi/task.o 00:04:23.876 CC lib/ftl/ftl_p2l.o 00:04:23.876 CC lib/ftl/ftl_p2l_log.o 00:04:23.876 CC lib/ftl/mngt/ftl_mngt.o 00:04:23.876 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:23.876 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:23.876 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:23.876 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:23.876 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:23.876 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:23.876 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:23.876 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:23.876 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:23.876 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:23.876 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:23.876 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:23.876 CC lib/ftl/utils/ftl_conf.o 00:04:23.876 CC lib/ftl/utils/ftl_md.o 00:04:23.876 CC 
lib/ftl/utils/ftl_mempool.o 00:04:23.876 CC lib/ftl/utils/ftl_bitmap.o 00:04:23.876 CC lib/ftl/utils/ftl_property.o 00:04:23.876 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:23.876 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:23.876 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:23.876 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:23.876 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:23.876 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:23.876 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:23.876 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:23.876 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:23.876 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:23.876 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:23.876 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:23.876 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:23.876 CC lib/ftl/base/ftl_base_bdev.o 00:04:23.876 CC lib/ftl/base/ftl_base_dev.o 00:04:23.876 CC lib/ftl/ftl_trace.o 00:04:24.448 LIB libspdk_nbd.a 00:04:24.448 SO libspdk_nbd.so.7.0 00:04:24.709 SYMLINK libspdk_nbd.so 00:04:24.709 LIB libspdk_scsi.a 00:04:24.709 SO libspdk_scsi.so.9.0 00:04:24.709 LIB libspdk_ublk.a 00:04:24.709 SYMLINK libspdk_scsi.so 00:04:24.709 SO libspdk_ublk.so.3.0 00:04:24.970 SYMLINK libspdk_ublk.so 00:04:24.970 LIB libspdk_ftl.a 00:04:25.231 CC lib/iscsi/conn.o 00:04:25.231 CC lib/iscsi/init_grp.o 00:04:25.231 CC lib/iscsi/iscsi.o 00:04:25.231 CC lib/iscsi/param.o 00:04:25.231 CC lib/iscsi/portal_grp.o 00:04:25.231 CC lib/iscsi/tgt_node.o 00:04:25.231 CC lib/iscsi/iscsi_subsystem.o 00:04:25.231 CC lib/iscsi/iscsi_rpc.o 00:04:25.231 CC lib/iscsi/task.o 00:04:25.231 CC lib/vhost/vhost.o 00:04:25.231 CC lib/vhost/vhost_rpc.o 00:04:25.231 CC lib/vhost/vhost_scsi.o 00:04:25.231 CC lib/vhost/vhost_blk.o 00:04:25.231 CC lib/vhost/rte_vhost_user.o 00:04:25.231 SO libspdk_ftl.so.9.0 00:04:25.492 SYMLINK libspdk_ftl.so 00:04:26.066 LIB libspdk_nvmf.a 00:04:26.066 SO libspdk_nvmf.so.19.0 00:04:26.066 LIB libspdk_vhost.a 00:04:26.066 SO libspdk_vhost.so.8.0 00:04:26.328 SYMLINK libspdk_nvmf.so 00:04:26.328 SYMLINK libspdk_vhost.so 00:04:26.328 LIB libspdk_iscsi.a 00:04:26.328 SO libspdk_iscsi.so.8.0 00:04:26.589 SYMLINK libspdk_iscsi.so 00:04:27.162 CC module/env_dpdk/env_dpdk_rpc.o 00:04:27.162 CC module/vfu_device/vfu_virtio.o 00:04:27.162 CC module/vfu_device/vfu_virtio_blk.o 00:04:27.162 CC module/vfu_device/vfu_virtio_scsi.o 00:04:27.162 CC module/vfu_device/vfu_virtio_rpc.o 00:04:27.162 CC module/vfu_device/vfu_virtio_fs.o 00:04:27.423 CC module/scheduler/gscheduler/gscheduler.o 00:04:27.423 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:27.423 CC module/accel/error/accel_error.o 00:04:27.423 CC module/accel/error/accel_error_rpc.o 00:04:27.423 LIB libspdk_env_dpdk_rpc.a 00:04:27.423 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:27.423 CC module/sock/posix/posix.o 00:04:27.423 CC module/accel/iaa/accel_iaa.o 00:04:27.423 CC module/accel/dsa/accel_dsa.o 00:04:27.423 CC module/fsdev/aio/fsdev_aio.o 00:04:27.423 CC module/accel/iaa/accel_iaa_rpc.o 00:04:27.423 CC module/accel/dsa/accel_dsa_rpc.o 00:04:27.423 CC module/blob/bdev/blob_bdev.o 00:04:27.424 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:27.424 CC module/fsdev/aio/linux_aio_mgr.o 00:04:27.424 CC module/keyring/linux/keyring.o 00:04:27.424 CC module/keyring/linux/keyring_rpc.o 00:04:27.424 CC module/keyring/file/keyring.o 00:04:27.424 CC module/keyring/file/keyring_rpc.o 00:04:27.424 CC module/accel/ioat/accel_ioat.o 00:04:27.424 CC module/accel/ioat/accel_ioat_rpc.o 00:04:27.424 SO libspdk_env_dpdk_rpc.so.6.0 00:04:27.424 SYMLINK 
libspdk_env_dpdk_rpc.so 00:04:27.424 LIB libspdk_scheduler_gscheduler.a 00:04:27.424 LIB libspdk_accel_error.a 00:04:27.424 LIB libspdk_keyring_file.a 00:04:27.424 LIB libspdk_scheduler_dpdk_governor.a 00:04:27.424 LIB libspdk_keyring_linux.a 00:04:27.424 SO libspdk_scheduler_gscheduler.so.4.0 00:04:27.424 SO libspdk_keyring_file.so.2.0 00:04:27.685 SO libspdk_accel_error.so.2.0 00:04:27.685 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:27.685 LIB libspdk_scheduler_dynamic.a 00:04:27.685 SO libspdk_keyring_linux.so.1.0 00:04:27.685 LIB libspdk_accel_ioat.a 00:04:27.685 LIB libspdk_accel_iaa.a 00:04:27.685 SYMLINK libspdk_scheduler_gscheduler.so 00:04:27.685 SO libspdk_scheduler_dynamic.so.4.0 00:04:27.685 SO libspdk_accel_ioat.so.6.0 00:04:27.685 SO libspdk_accel_iaa.so.3.0 00:04:27.685 LIB libspdk_blob_bdev.a 00:04:27.685 SYMLINK libspdk_keyring_file.so 00:04:27.685 SYMLINK libspdk_accel_error.so 00:04:27.685 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:27.685 LIB libspdk_accel_dsa.a 00:04:27.685 SYMLINK libspdk_keyring_linux.so 00:04:27.685 SO libspdk_blob_bdev.so.11.0 00:04:27.685 SO libspdk_accel_dsa.so.5.0 00:04:27.685 SYMLINK libspdk_scheduler_dynamic.so 00:04:27.685 SYMLINK libspdk_accel_iaa.so 00:04:27.685 SYMLINK libspdk_accel_ioat.so 00:04:27.685 LIB libspdk_vfu_device.a 00:04:27.685 SYMLINK libspdk_blob_bdev.so 00:04:27.685 SYMLINK libspdk_accel_dsa.so 00:04:27.685 SO libspdk_vfu_device.so.3.0 00:04:27.945 SYMLINK libspdk_vfu_device.so 00:04:27.945 LIB libspdk_fsdev_aio.a 00:04:27.945 SO libspdk_fsdev_aio.so.1.0 00:04:27.945 LIB libspdk_sock_posix.a 00:04:27.945 SO libspdk_sock_posix.so.6.0 00:04:27.945 SYMLINK libspdk_fsdev_aio.so 00:04:28.205 SYMLINK libspdk_sock_posix.so 00:04:28.205 CC module/blobfs/bdev/blobfs_bdev.o 00:04:28.205 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:28.205 CC module/bdev/delay/vbdev_delay.o 00:04:28.205 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:28.205 CC module/bdev/null/bdev_null.o 00:04:28.205 CC module/bdev/null/bdev_null_rpc.o 00:04:28.205 CC module/bdev/error/vbdev_error.o 00:04:28.205 CC module/bdev/error/vbdev_error_rpc.o 00:04:28.205 CC module/bdev/lvol/vbdev_lvol.o 00:04:28.205 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:28.205 CC module/bdev/malloc/bdev_malloc.o 00:04:28.205 CC module/bdev/gpt/gpt.o 00:04:28.205 CC module/bdev/nvme/bdev_nvme.o 00:04:28.205 CC module/bdev/passthru/vbdev_passthru.o 00:04:28.205 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:28.205 CC module/bdev/nvme/nvme_rpc.o 00:04:28.205 CC module/bdev/gpt/vbdev_gpt.o 00:04:28.205 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:28.205 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:28.205 CC module/bdev/split/vbdev_split.o 00:04:28.205 CC module/bdev/ftl/bdev_ftl.o 00:04:28.205 CC module/bdev/nvme/bdev_mdns_client.o 00:04:28.205 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:28.205 CC module/bdev/nvme/vbdev_opal.o 00:04:28.205 CC module/bdev/split/vbdev_split_rpc.o 00:04:28.205 CC module/bdev/raid/bdev_raid.o 00:04:28.205 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:28.205 CC module/bdev/raid/bdev_raid_rpc.o 00:04:28.205 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:28.205 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:28.205 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:28.205 CC module/bdev/raid/bdev_raid_sb.o 00:04:28.205 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:28.205 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:28.205 CC module/bdev/raid/raid0.o 00:04:28.205 CC module/bdev/aio/bdev_aio.o 00:04:28.206 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 
00:04:28.206 CC module/bdev/iscsi/bdev_iscsi.o 00:04:28.206 CC module/bdev/aio/bdev_aio_rpc.o 00:04:28.206 CC module/bdev/raid/concat.o 00:04:28.206 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:28.206 CC module/bdev/raid/raid1.o 00:04:28.464 LIB libspdk_blobfs_bdev.a 00:04:28.723 SO libspdk_blobfs_bdev.so.6.0 00:04:28.723 LIB libspdk_bdev_error.a 00:04:28.723 LIB libspdk_bdev_null.a 00:04:28.723 SYMLINK libspdk_blobfs_bdev.so 00:04:28.723 LIB libspdk_bdev_split.a 00:04:28.723 SO libspdk_bdev_error.so.6.0 00:04:28.723 LIB libspdk_bdev_gpt.a 00:04:28.723 SO libspdk_bdev_null.so.6.0 00:04:28.723 LIB libspdk_bdev_passthru.a 00:04:28.723 SO libspdk_bdev_split.so.6.0 00:04:28.723 SO libspdk_bdev_gpt.so.6.0 00:04:28.723 LIB libspdk_bdev_delay.a 00:04:28.723 LIB libspdk_bdev_ftl.a 00:04:28.723 SO libspdk_bdev_passthru.so.6.0 00:04:28.723 SYMLINK libspdk_bdev_error.so 00:04:28.723 LIB libspdk_bdev_malloc.a 00:04:28.723 LIB libspdk_bdev_aio.a 00:04:28.723 SYMLINK libspdk_bdev_null.so 00:04:28.723 LIB libspdk_bdev_zone_block.a 00:04:28.723 SO libspdk_bdev_delay.so.6.0 00:04:28.723 SO libspdk_bdev_ftl.so.6.0 00:04:28.723 SYMLINK libspdk_bdev_split.so 00:04:28.723 LIB libspdk_bdev_iscsi.a 00:04:28.723 SYMLINK libspdk_bdev_gpt.so 00:04:28.723 SO libspdk_bdev_aio.so.6.0 00:04:28.723 SO libspdk_bdev_malloc.so.6.0 00:04:28.723 SYMLINK libspdk_bdev_passthru.so 00:04:28.723 SO libspdk_bdev_iscsi.so.6.0 00:04:28.723 SO libspdk_bdev_zone_block.so.6.0 00:04:28.983 SYMLINK libspdk_bdev_delay.so 00:04:28.983 SYMLINK libspdk_bdev_ftl.so 00:04:28.984 SYMLINK libspdk_bdev_aio.so 00:04:28.984 SYMLINK libspdk_bdev_malloc.so 00:04:28.984 LIB libspdk_bdev_lvol.a 00:04:28.984 SYMLINK libspdk_bdev_iscsi.so 00:04:28.984 SYMLINK libspdk_bdev_zone_block.so 00:04:28.984 LIB libspdk_bdev_virtio.a 00:04:28.984 SO libspdk_bdev_lvol.so.6.0 00:04:28.984 SO libspdk_bdev_virtio.so.6.0 00:04:28.984 SYMLINK libspdk_bdev_lvol.so 00:04:28.984 SYMLINK libspdk_bdev_virtio.so 00:04:29.243 LIB libspdk_bdev_raid.a 00:04:29.504 SO libspdk_bdev_raid.so.6.0 00:04:29.504 SYMLINK libspdk_bdev_raid.so 00:04:30.445 LIB libspdk_bdev_nvme.a 00:04:30.445 SO libspdk_bdev_nvme.so.7.0 00:04:30.705 SYMLINK libspdk_bdev_nvme.so 00:04:31.277 CC module/event/subsystems/vmd/vmd.o 00:04:31.277 CC module/event/subsystems/iobuf/iobuf.o 00:04:31.277 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:31.277 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:31.277 CC module/event/subsystems/sock/sock.o 00:04:31.277 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:31.277 CC module/event/subsystems/scheduler/scheduler.o 00:04:31.277 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:31.277 CC module/event/subsystems/keyring/keyring.o 00:04:31.277 CC module/event/subsystems/fsdev/fsdev.o 00:04:31.538 LIB libspdk_event_scheduler.a 00:04:31.538 LIB libspdk_event_fsdev.a 00:04:31.538 LIB libspdk_event_iobuf.a 00:04:31.538 LIB libspdk_event_vmd.a 00:04:31.538 LIB libspdk_event_vhost_blk.a 00:04:31.538 LIB libspdk_event_keyring.a 00:04:31.538 LIB libspdk_event_vfu_tgt.a 00:04:31.538 LIB libspdk_event_sock.a 00:04:31.538 SO libspdk_event_scheduler.so.4.0 00:04:31.538 SO libspdk_event_fsdev.so.1.0 00:04:31.538 SO libspdk_event_vhost_blk.so.3.0 00:04:31.538 SO libspdk_event_iobuf.so.3.0 00:04:31.538 SO libspdk_event_keyring.so.1.0 00:04:31.538 SO libspdk_event_vmd.so.6.0 00:04:31.538 SO libspdk_event_sock.so.5.0 00:04:31.538 SO libspdk_event_vfu_tgt.so.3.0 00:04:31.538 SYMLINK libspdk_event_fsdev.so 00:04:31.538 SYMLINK libspdk_event_scheduler.so 00:04:31.538 SYMLINK 
libspdk_event_vhost_blk.so 00:04:31.538 SYMLINK libspdk_event_sock.so 00:04:31.538 SYMLINK libspdk_event_keyring.so 00:04:31.538 SYMLINK libspdk_event_iobuf.so 00:04:31.538 SYMLINK libspdk_event_vmd.so 00:04:31.538 SYMLINK libspdk_event_vfu_tgt.so 00:04:32.110 CC module/event/subsystems/accel/accel.o 00:04:32.110 LIB libspdk_event_accel.a 00:04:32.110 SO libspdk_event_accel.so.6.0 00:04:32.370 SYMLINK libspdk_event_accel.so 00:04:32.632 CC module/event/subsystems/bdev/bdev.o 00:04:32.919 LIB libspdk_event_bdev.a 00:04:32.920 SO libspdk_event_bdev.so.6.0 00:04:32.920 SYMLINK libspdk_event_bdev.so 00:04:33.181 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:33.181 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:33.181 CC module/event/subsystems/nbd/nbd.o 00:04:33.181 CC module/event/subsystems/scsi/scsi.o 00:04:33.181 CC module/event/subsystems/ublk/ublk.o 00:04:33.443 LIB libspdk_event_nbd.a 00:04:33.443 LIB libspdk_event_ublk.a 00:04:33.443 LIB libspdk_event_scsi.a 00:04:33.443 SO libspdk_event_nbd.so.6.0 00:04:33.443 SO libspdk_event_ublk.so.3.0 00:04:33.443 SO libspdk_event_scsi.so.6.0 00:04:33.443 LIB libspdk_event_nvmf.a 00:04:33.443 SYMLINK libspdk_event_nbd.so 00:04:33.443 SYMLINK libspdk_event_ublk.so 00:04:33.443 SYMLINK libspdk_event_scsi.so 00:04:33.443 SO libspdk_event_nvmf.so.6.0 00:04:33.704 SYMLINK libspdk_event_nvmf.so 00:04:34.020 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:34.020 CC module/event/subsystems/iscsi/iscsi.o 00:04:34.020 LIB libspdk_event_vhost_scsi.a 00:04:34.020 LIB libspdk_event_iscsi.a 00:04:34.282 SO libspdk_event_vhost_scsi.so.3.0 00:04:34.282 SO libspdk_event_iscsi.so.6.0 00:04:34.282 SYMLINK libspdk_event_vhost_scsi.so 00:04:34.282 SYMLINK libspdk_event_iscsi.so 00:04:34.282 SO libspdk.so.6.0 00:04:34.282 SYMLINK libspdk.so 00:04:34.860 CC app/trace_record/trace_record.o 00:04:34.861 CXX app/trace/trace.o 00:04:34.861 TEST_HEADER include/spdk/accel.h 00:04:34.861 TEST_HEADER include/spdk/accel_module.h 00:04:34.861 TEST_HEADER include/spdk/assert.h 00:04:34.861 TEST_HEADER include/spdk/barrier.h 00:04:34.861 TEST_HEADER include/spdk/base64.h 00:04:34.861 TEST_HEADER include/spdk/bdev.h 00:04:34.861 CC app/spdk_top/spdk_top.o 00:04:34.861 TEST_HEADER include/spdk/bdev_module.h 00:04:34.861 TEST_HEADER include/spdk/bdev_zone.h 00:04:34.861 TEST_HEADER include/spdk/bit_array.h 00:04:34.861 TEST_HEADER include/spdk/bit_pool.h 00:04:34.861 CC test/rpc_client/rpc_client_test.o 00:04:34.861 TEST_HEADER include/spdk/blob_bdev.h 00:04:34.861 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:34.861 CC app/spdk_lspci/spdk_lspci.o 00:04:34.861 TEST_HEADER include/spdk/blobfs.h 00:04:34.861 TEST_HEADER include/spdk/blob.h 00:04:34.861 CC app/spdk_nvme_perf/perf.o 00:04:34.861 CC app/spdk_nvme_identify/identify.o 00:04:34.861 TEST_HEADER include/spdk/conf.h 00:04:34.861 TEST_HEADER include/spdk/config.h 00:04:34.861 CC app/spdk_nvme_discover/discovery_aer.o 00:04:34.861 TEST_HEADER include/spdk/crc16.h 00:04:34.861 TEST_HEADER include/spdk/cpuset.h 00:04:34.861 TEST_HEADER include/spdk/crc32.h 00:04:34.861 TEST_HEADER include/spdk/crc64.h 00:04:34.861 TEST_HEADER include/spdk/dif.h 00:04:34.861 TEST_HEADER include/spdk/dma.h 00:04:34.861 TEST_HEADER include/spdk/endian.h 00:04:34.861 TEST_HEADER include/spdk/env_dpdk.h 00:04:34.861 TEST_HEADER include/spdk/event.h 00:04:34.861 TEST_HEADER include/spdk/env.h 00:04:34.861 TEST_HEADER include/spdk/fd_group.h 00:04:34.861 TEST_HEADER include/spdk/file.h 00:04:34.861 TEST_HEADER include/spdk/fd.h 00:04:34.861 
TEST_HEADER include/spdk/fsdev.h 00:04:34.861 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:34.861 TEST_HEADER include/spdk/ftl.h 00:04:34.861 TEST_HEADER include/spdk/fsdev_module.h 00:04:34.861 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:34.861 TEST_HEADER include/spdk/gpt_spec.h 00:04:34.861 TEST_HEADER include/spdk/histogram_data.h 00:04:34.861 TEST_HEADER include/spdk/hexlify.h 00:04:34.861 TEST_HEADER include/spdk/idxd.h 00:04:34.861 CC app/spdk_dd/spdk_dd.o 00:04:34.861 TEST_HEADER include/spdk/idxd_spec.h 00:04:34.861 TEST_HEADER include/spdk/init.h 00:04:34.861 TEST_HEADER include/spdk/ioat.h 00:04:34.861 TEST_HEADER include/spdk/ioat_spec.h 00:04:34.861 TEST_HEADER include/spdk/iscsi_spec.h 00:04:34.861 TEST_HEADER include/spdk/json.h 00:04:34.861 CC app/nvmf_tgt/nvmf_main.o 00:04:34.861 TEST_HEADER include/spdk/jsonrpc.h 00:04:34.861 TEST_HEADER include/spdk/keyring.h 00:04:34.861 TEST_HEADER include/spdk/keyring_module.h 00:04:34.861 TEST_HEADER include/spdk/likely.h 00:04:34.861 TEST_HEADER include/spdk/log.h 00:04:34.861 TEST_HEADER include/spdk/lvol.h 00:04:34.861 CC app/iscsi_tgt/iscsi_tgt.o 00:04:34.861 TEST_HEADER include/spdk/md5.h 00:04:34.861 TEST_HEADER include/spdk/mmio.h 00:04:34.861 TEST_HEADER include/spdk/memory.h 00:04:34.861 TEST_HEADER include/spdk/nbd.h 00:04:34.861 TEST_HEADER include/spdk/net.h 00:04:34.861 TEST_HEADER include/spdk/nvme.h 00:04:34.861 TEST_HEADER include/spdk/notify.h 00:04:34.861 TEST_HEADER include/spdk/nvme_intel.h 00:04:34.861 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:34.861 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:34.861 TEST_HEADER include/spdk/nvme_zns.h 00:04:34.861 TEST_HEADER include/spdk/nvme_spec.h 00:04:34.861 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:34.861 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:34.861 TEST_HEADER include/spdk/nvmf.h 00:04:34.861 TEST_HEADER include/spdk/nvmf_transport.h 00:04:34.861 TEST_HEADER include/spdk/nvmf_spec.h 00:04:34.861 CC app/spdk_tgt/spdk_tgt.o 00:04:34.861 TEST_HEADER include/spdk/opal.h 00:04:34.861 TEST_HEADER include/spdk/pci_ids.h 00:04:34.861 TEST_HEADER include/spdk/opal_spec.h 00:04:34.861 TEST_HEADER include/spdk/pipe.h 00:04:34.861 TEST_HEADER include/spdk/queue.h 00:04:34.861 TEST_HEADER include/spdk/reduce.h 00:04:34.861 TEST_HEADER include/spdk/scheduler.h 00:04:34.861 TEST_HEADER include/spdk/rpc.h 00:04:34.861 TEST_HEADER include/spdk/scsi.h 00:04:34.861 TEST_HEADER include/spdk/scsi_spec.h 00:04:34.861 TEST_HEADER include/spdk/sock.h 00:04:34.861 TEST_HEADER include/spdk/string.h 00:04:34.861 TEST_HEADER include/spdk/stdinc.h 00:04:34.861 TEST_HEADER include/spdk/thread.h 00:04:34.861 TEST_HEADER include/spdk/trace.h 00:04:34.861 TEST_HEADER include/spdk/trace_parser.h 00:04:34.861 TEST_HEADER include/spdk/tree.h 00:04:34.861 TEST_HEADER include/spdk/ublk.h 00:04:34.861 TEST_HEADER include/spdk/util.h 00:04:34.861 TEST_HEADER include/spdk/uuid.h 00:04:34.861 TEST_HEADER include/spdk/version.h 00:04:34.861 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:34.861 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:34.861 TEST_HEADER include/spdk/vhost.h 00:04:34.861 TEST_HEADER include/spdk/vmd.h 00:04:34.861 TEST_HEADER include/spdk/zipf.h 00:04:34.861 TEST_HEADER include/spdk/xor.h 00:04:34.861 CXX test/cpp_headers/accel.o 00:04:34.861 CXX test/cpp_headers/accel_module.o 00:04:34.861 CXX test/cpp_headers/assert.o 00:04:34.861 CXX test/cpp_headers/barrier.o 00:04:34.861 CXX test/cpp_headers/base64.o 00:04:34.861 CXX test/cpp_headers/bdev_module.o 00:04:34.861 CXX 
test/cpp_headers/bdev.o 00:04:34.861 CXX test/cpp_headers/bdev_zone.o 00:04:34.861 CXX test/cpp_headers/bit_array.o 00:04:34.861 CXX test/cpp_headers/blob_bdev.o 00:04:34.861 CXX test/cpp_headers/bit_pool.o 00:04:34.861 CXX test/cpp_headers/blobfs_bdev.o 00:04:34.861 CXX test/cpp_headers/blob.o 00:04:34.861 CXX test/cpp_headers/blobfs.o 00:04:34.861 CXX test/cpp_headers/conf.o 00:04:34.861 CXX test/cpp_headers/config.o 00:04:34.861 CXX test/cpp_headers/crc16.o 00:04:34.861 CXX test/cpp_headers/cpuset.o 00:04:34.861 CXX test/cpp_headers/crc32.o 00:04:34.861 CXX test/cpp_headers/dif.o 00:04:34.861 CXX test/cpp_headers/crc64.o 00:04:34.861 CXX test/cpp_headers/dma.o 00:04:34.861 CXX test/cpp_headers/env_dpdk.o 00:04:34.861 CXX test/cpp_headers/endian.o 00:04:34.861 CXX test/cpp_headers/env.o 00:04:34.861 CXX test/cpp_headers/event.o 00:04:34.861 CXX test/cpp_headers/fd_group.o 00:04:35.132 CXX test/cpp_headers/fd.o 00:04:35.132 CXX test/cpp_headers/file.o 00:04:35.132 CXX test/cpp_headers/fsdev.o 00:04:35.132 CXX test/cpp_headers/fsdev_module.o 00:04:35.132 CXX test/cpp_headers/ftl.o 00:04:35.132 CXX test/cpp_headers/fuse_dispatcher.o 00:04:35.132 CXX test/cpp_headers/gpt_spec.o 00:04:35.132 CXX test/cpp_headers/hexlify.o 00:04:35.132 CXX test/cpp_headers/histogram_data.o 00:04:35.132 CXX test/cpp_headers/idxd_spec.o 00:04:35.132 CXX test/cpp_headers/init.o 00:04:35.132 CXX test/cpp_headers/idxd.o 00:04:35.132 CXX test/cpp_headers/ioat.o 00:04:35.132 CXX test/cpp_headers/ioat_spec.o 00:04:35.132 CXX test/cpp_headers/iscsi_spec.o 00:04:35.132 CXX test/cpp_headers/json.o 00:04:35.132 CXX test/cpp_headers/jsonrpc.o 00:04:35.132 CXX test/cpp_headers/likely.o 00:04:35.132 CXX test/cpp_headers/keyring.o 00:04:35.132 CXX test/cpp_headers/keyring_module.o 00:04:35.132 CXX test/cpp_headers/log.o 00:04:35.132 CXX test/cpp_headers/lvol.o 00:04:35.132 CXX test/cpp_headers/memory.o 00:04:35.132 CXX test/cpp_headers/mmio.o 00:04:35.132 CXX test/cpp_headers/md5.o 00:04:35.132 CXX test/cpp_headers/nbd.o 00:04:35.132 CXX test/cpp_headers/net.o 00:04:35.132 CXX test/cpp_headers/nvme.o 00:04:35.132 CXX test/cpp_headers/notify.o 00:04:35.132 CXX test/cpp_headers/nvme_intel.o 00:04:35.132 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:35.132 CXX test/cpp_headers/nvme_ocssd.o 00:04:35.132 CC examples/ioat/perf/perf.o 00:04:35.132 CXX test/cpp_headers/nvme_spec.o 00:04:35.132 CXX test/cpp_headers/nvme_zns.o 00:04:35.132 CXX test/cpp_headers/nvmf_spec.o 00:04:35.132 CXX test/cpp_headers/nvmf_cmd.o 00:04:35.132 CXX test/cpp_headers/nvmf.o 00:04:35.132 CC examples/util/zipf/zipf.o 00:04:35.132 CC examples/ioat/verify/verify.o 00:04:35.132 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:35.132 CXX test/cpp_headers/opal_spec.o 00:04:35.132 CXX test/cpp_headers/nvmf_transport.o 00:04:35.132 CXX test/cpp_headers/opal.o 00:04:35.132 CC test/app/jsoncat/jsoncat.o 00:04:35.132 CXX test/cpp_headers/pci_ids.o 00:04:35.132 CXX test/cpp_headers/pipe.o 00:04:35.132 CC test/env/memory/memory_ut.o 00:04:35.132 CXX test/cpp_headers/reduce.o 00:04:35.132 CXX test/cpp_headers/queue.o 00:04:35.132 CXX test/cpp_headers/rpc.o 00:04:35.132 CXX test/cpp_headers/scheduler.o 00:04:35.132 CC test/thread/poller_perf/poller_perf.o 00:04:35.132 CXX test/cpp_headers/scsi.o 00:04:35.132 CXX test/cpp_headers/scsi_spec.o 00:04:35.132 CXX test/cpp_headers/sock.o 00:04:35.132 CXX test/cpp_headers/stdinc.o 00:04:35.132 CXX test/cpp_headers/trace.o 00:04:35.132 CC test/env/pci/pci_ut.o 00:04:35.132 CXX test/cpp_headers/string.o 00:04:35.132 CXX 
test/cpp_headers/thread.o 00:04:35.132 CXX test/cpp_headers/tree.o 00:04:35.132 CC test/app/stub/stub.o 00:04:35.132 CXX test/cpp_headers/trace_parser.o 00:04:35.132 CXX test/cpp_headers/ublk.o 00:04:35.132 CC test/app/histogram_perf/histogram_perf.o 00:04:35.132 CXX test/cpp_headers/util.o 00:04:35.132 CC test/env/vtophys/vtophys.o 00:04:35.132 CXX test/cpp_headers/uuid.o 00:04:35.132 CC app/fio/nvme/fio_plugin.o 00:04:35.132 CXX test/cpp_headers/vfio_user_pci.o 00:04:35.132 CXX test/cpp_headers/version.o 00:04:35.132 CXX test/cpp_headers/vhost.o 00:04:35.132 CXX test/cpp_headers/vfio_user_spec.o 00:04:35.132 CXX test/cpp_headers/vmd.o 00:04:35.132 CXX test/cpp_headers/zipf.o 00:04:35.132 CXX test/cpp_headers/xor.o 00:04:35.132 LINK spdk_lspci 00:04:35.132 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:35.132 CC test/dma/test_dma/test_dma.o 00:04:35.132 CC test/app/bdev_svc/bdev_svc.o 00:04:35.132 LINK rpc_client_test 00:04:35.132 CC app/fio/bdev/fio_plugin.o 00:04:35.401 LINK interrupt_tgt 00:04:35.401 LINK spdk_nvme_discover 00:04:35.401 LINK nvmf_tgt 00:04:35.670 LINK spdk_trace_record 00:04:35.670 LINK iscsi_tgt 00:04:35.933 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:35.933 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:35.933 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:35.933 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:35.933 CC test/env/mem_callbacks/mem_callbacks.o 00:04:35.933 LINK spdk_dd 00:04:35.933 LINK jsoncat 00:04:35.933 LINK zipf 00:04:35.933 LINK spdk_tgt 00:04:35.933 LINK poller_perf 00:04:36.194 LINK stub 00:04:36.194 LINK histogram_perf 00:04:36.194 LINK vtophys 00:04:36.194 LINK bdev_svc 00:04:36.194 LINK env_dpdk_post_init 00:04:36.195 LINK ioat_perf 00:04:36.195 LINK verify 00:04:36.455 LINK spdk_top 00:04:36.455 LINK spdk_trace 00:04:36.455 LINK vhost_fuzz 00:04:36.455 LINK pci_ut 00:04:36.455 LINK nvme_fuzz 00:04:36.455 LINK spdk_nvme 00:04:36.455 LINK test_dma 00:04:36.715 LINK spdk_nvme_perf 00:04:36.715 CC test/event/event_perf/event_perf.o 00:04:36.715 CC test/event/reactor_perf/reactor_perf.o 00:04:36.715 CC test/event/reactor/reactor.o 00:04:36.715 LINK spdk_nvme_identify 00:04:36.715 CC examples/vmd/lsvmd/lsvmd.o 00:04:36.715 LINK spdk_bdev 00:04:36.715 CC examples/vmd/led/led.o 00:04:36.715 CC examples/sock/hello_world/hello_sock.o 00:04:36.715 CC examples/idxd/perf/perf.o 00:04:36.715 CC test/event/app_repeat/app_repeat.o 00:04:36.715 LINK mem_callbacks 00:04:36.715 CC test/event/scheduler/scheduler.o 00:04:36.715 CC examples/thread/thread/thread_ex.o 00:04:36.715 LINK reactor 00:04:36.715 LINK event_perf 00:04:36.715 LINK reactor_perf 00:04:36.715 LINK lsvmd 00:04:36.715 CC app/vhost/vhost.o 00:04:36.715 LINK led 00:04:36.976 LINK app_repeat 00:04:36.976 LINK hello_sock 00:04:36.976 LINK scheduler 00:04:36.976 LINK thread 00:04:36.976 LINK idxd_perf 00:04:36.976 LINK vhost 00:04:37.238 CC test/nvme/sgl/sgl.o 00:04:37.238 CC test/nvme/aer/aer.o 00:04:37.238 CC test/nvme/connect_stress/connect_stress.o 00:04:37.238 CC test/nvme/startup/startup.o 00:04:37.238 CC test/nvme/overhead/overhead.o 00:04:37.238 CC test/nvme/fused_ordering/fused_ordering.o 00:04:37.238 CC test/nvme/e2edp/nvme_dp.o 00:04:37.238 CC test/nvme/compliance/nvme_compliance.o 00:04:37.238 CC test/nvme/reset/reset.o 00:04:37.238 CC test/nvme/fdp/fdp.o 00:04:37.238 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:37.238 CC test/nvme/err_injection/err_injection.o 00:04:37.238 CC test/nvme/boot_partition/boot_partition.o 00:04:37.238 CC 
test/nvme/simple_copy/simple_copy.o 00:04:37.238 CC test/nvme/reserve/reserve.o 00:04:37.238 CC test/nvme/cuse/cuse.o 00:04:37.238 CC test/accel/dif/dif.o 00:04:37.238 CC test/blobfs/mkfs/mkfs.o 00:04:37.238 LINK memory_ut 00:04:37.238 CC test/lvol/esnap/esnap.o 00:04:37.499 LINK startup 00:04:37.499 LINK connect_stress 00:04:37.499 LINK boot_partition 00:04:37.499 LINK fused_ordering 00:04:37.499 LINK err_injection 00:04:37.499 LINK doorbell_aers 00:04:37.499 LINK reserve 00:04:37.499 LINK sgl 00:04:37.499 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:37.499 CC examples/nvme/arbitration/arbitration.o 00:04:37.499 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:37.499 LINK mkfs 00:04:37.499 CC examples/nvme/abort/abort.o 00:04:37.499 LINK simple_copy 00:04:37.499 CC examples/nvme/hello_world/hello_world.o 00:04:37.499 CC examples/nvme/reconnect/reconnect.o 00:04:37.499 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:37.499 CC examples/nvme/hotplug/hotplug.o 00:04:37.499 LINK aer 00:04:37.499 LINK nvme_dp 00:04:37.499 LINK overhead 00:04:37.499 LINK reset 00:04:37.499 LINK nvme_compliance 00:04:37.499 LINK fdp 00:04:37.499 LINK iscsi_fuzz 00:04:37.760 CC examples/accel/perf/accel_perf.o 00:04:37.760 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:37.760 CC examples/blob/hello_world/hello_blob.o 00:04:37.760 CC examples/blob/cli/blobcli.o 00:04:37.760 LINK cmb_copy 00:04:37.760 LINK pmr_persistence 00:04:37.760 LINK hello_world 00:04:37.760 LINK hotplug 00:04:37.760 LINK arbitration 00:04:37.760 LINK abort 00:04:37.760 LINK reconnect 00:04:37.760 LINK dif 00:04:38.022 LINK nvme_manage 00:04:38.022 LINK hello_blob 00:04:38.022 LINK hello_fsdev 00:04:38.022 LINK accel_perf 00:04:38.283 LINK blobcli 00:04:38.544 LINK cuse 00:04:38.544 CC test/bdev/bdevio/bdevio.o 00:04:38.806 CC examples/bdev/hello_world/hello_bdev.o 00:04:38.806 CC examples/bdev/bdevperf/bdevperf.o 00:04:38.806 LINK bdevio 00:04:39.067 LINK hello_bdev 00:04:39.329 LINK bdevperf 00:04:39.902 CC examples/nvmf/nvmf/nvmf.o 00:04:40.474 LINK nvmf 00:04:41.863 LINK esnap 00:04:42.124 00:04:42.124 real 0m56.187s 00:04:42.124 user 8m9.879s 00:04:42.124 sys 5m31.019s 00:04:42.124 09:25:41 make -- common/autotest_common.sh@1129 -- $ xtrace_disable 00:04:42.124 09:25:41 make -- common/autotest_common.sh@10 -- $ set +x 00:04:42.124 ************************************ 00:04:42.124 END TEST make 00:04:42.124 ************************************ 00:04:42.124 09:25:41 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:42.124 09:25:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:42.124 09:25:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:42.124 09:25:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.124 09:25:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:42.124 09:25:41 -- pm/common@44 -- $ pid=3035614 00:04:42.124 09:25:41 -- pm/common@50 -- $ kill -TERM 3035614 00:04:42.124 09:25:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.124 09:25:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:42.124 09:25:41 -- pm/common@44 -- $ pid=3035615 00:04:42.124 09:25:41 -- pm/common@50 -- $ kill -TERM 3035615 00:04:42.124 09:25:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.124 09:25:41 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:42.124 09:25:41 -- pm/common@44 -- $ pid=3035617 00:04:42.124 09:25:41 -- pm/common@50 -- $ kill -TERM 3035617 00:04:42.124 09:25:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.124 09:25:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:42.124 09:25:41 -- pm/common@44 -- $ pid=3035640 00:04:42.124 09:25:41 -- pm/common@50 -- $ sudo -E kill -TERM 3035640 00:04:42.386 09:25:41 -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:04:42.386 09:25:41 -- common/autotest_common.sh@1626 -- # lcov --version 00:04:42.386 09:25:41 -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:04:42.386 09:25:42 -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:04:42.386 09:25:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.386 09:25:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.386 09:25:42 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.386 09:25:42 -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.386 09:25:42 -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.386 09:25:42 -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.386 09:25:42 -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.386 09:25:42 -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.386 09:25:42 -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.386 09:25:42 -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.386 09:25:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.386 09:25:42 -- scripts/common.sh@344 -- # case "$op" in 00:04:42.386 09:25:42 -- scripts/common.sh@345 -- # : 1 00:04:42.386 09:25:42 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.386 09:25:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.386 09:25:42 -- scripts/common.sh@365 -- # decimal 1 00:04:42.387 09:25:42 -- scripts/common.sh@353 -- # local d=1 00:04:42.387 09:25:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.387 09:25:42 -- scripts/common.sh@355 -- # echo 1 00:04:42.387 09:25:42 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.387 09:25:42 -- scripts/common.sh@366 -- # decimal 2 00:04:42.387 09:25:42 -- scripts/common.sh@353 -- # local d=2 00:04:42.387 09:25:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.387 09:25:42 -- scripts/common.sh@355 -- # echo 2 00:04:42.387 09:25:42 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.387 09:25:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.387 09:25:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.387 09:25:42 -- scripts/common.sh@368 -- # return 0 00:04:42.387 09:25:42 -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.387 09:25:42 -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:04:42.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.387 --rc genhtml_branch_coverage=1 00:04:42.387 --rc genhtml_function_coverage=1 00:04:42.387 --rc genhtml_legend=1 00:04:42.387 --rc geninfo_all_blocks=1 00:04:42.387 --rc geninfo_unexecuted_blocks=1 00:04:42.387 00:04:42.387 ' 00:04:42.387 09:25:42 -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:04:42.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.387 --rc genhtml_branch_coverage=1 00:04:42.387 --rc genhtml_function_coverage=1 00:04:42.387 --rc genhtml_legend=1 00:04:42.387 --rc geninfo_all_blocks=1 00:04:42.387 --rc geninfo_unexecuted_blocks=1 00:04:42.387 00:04:42.387 ' 00:04:42.387 09:25:42 -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:04:42.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.387 --rc genhtml_branch_coverage=1 00:04:42.387 --rc genhtml_function_coverage=1 00:04:42.387 --rc genhtml_legend=1 00:04:42.387 --rc geninfo_all_blocks=1 00:04:42.387 --rc geninfo_unexecuted_blocks=1 00:04:42.387 00:04:42.387 ' 00:04:42.387 09:25:42 -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:04:42.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.387 --rc genhtml_branch_coverage=1 00:04:42.387 --rc genhtml_function_coverage=1 00:04:42.387 --rc genhtml_legend=1 00:04:42.387 --rc geninfo_all_blocks=1 00:04:42.387 --rc geninfo_unexecuted_blocks=1 00:04:42.387 00:04:42.387 ' 00:04:42.387 09:25:42 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:42.387 09:25:42 -- nvmf/common.sh@7 -- # uname -s 00:04:42.387 09:25:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.387 09:25:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.387 09:25:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.387 09:25:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.387 09:25:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.387 09:25:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.387 09:25:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.387 09:25:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.387 09:25:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.387 09:25:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.649 09:25:42 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:42.649 09:25:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:42.649 09:25:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.649 09:25:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.649 09:25:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:42.649 09:25:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.649 09:25:42 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:42.649 09:25:42 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:42.649 09:25:42 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.649 09:25:42 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.649 09:25:42 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.649 09:25:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.649 09:25:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.649 09:25:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.649 09:25:42 -- paths/export.sh@5 -- # export PATH 00:04:42.649 09:25:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.649 09:25:42 -- nvmf/common.sh@51 -- # : 0 00:04:42.649 09:25:42 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:42.649 09:25:42 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:42.649 09:25:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.649 09:25:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.649 09:25:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.649 09:25:42 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:42.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:42.649 09:25:42 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:42.649 09:25:42 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:42.649 09:25:42 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:42.649 09:25:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:42.649 09:25:42 -- spdk/autotest.sh@32 -- # uname -s 00:04:42.649 09:25:42 -- spdk/autotest.sh@32 -- # '[' Linux = 
Linux ']' 00:04:42.649 09:25:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:42.649 09:25:42 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:42.649 09:25:42 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:42.649 09:25:42 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:42.649 09:25:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:42.649 09:25:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:42.649 09:25:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:42.649 09:25:42 -- spdk/autotest.sh@48 -- # udevadm_pid=3101207 00:04:42.649 09:25:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:42.649 09:25:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:42.649 09:25:42 -- pm/common@17 -- # local monitor 00:04:42.649 09:25:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.649 09:25:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.649 09:25:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.649 09:25:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.649 09:25:42 -- pm/common@21 -- # date +%s 00:04:42.649 09:25:42 -- pm/common@21 -- # date +%s 00:04:42.649 09:25:42 -- pm/common@25 -- # sleep 1 00:04:42.649 09:25:42 -- pm/common@21 -- # date +%s 00:04:42.649 09:25:42 -- pm/common@21 -- # date +%s 00:04:42.649 09:25:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728285942 00:04:42.649 09:25:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728285942 00:04:42.649 09:25:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728285942 00:04:42.649 09:25:42 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728285942 00:04:42.649 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728285942_collect-cpu-load.pm.log 00:04:42.650 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728285942_collect-vmstat.pm.log 00:04:42.650 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728285942_collect-cpu-temp.pm.log 00:04:42.650 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728285942_collect-bmc-pm.bmc.pm.log 00:04:43.594 09:25:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:43.594 09:25:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:43.594 09:25:43 -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:43.594 09:25:43 -- common/autotest_common.sh@10 -- # set +x 00:04:43.594 09:25:43 -- spdk/autotest.sh@59 -- # create_test_list 00:04:43.594 
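The four "Redirecting to ..." lines above come from the resource monitors autotest starts before the test body: each collector is sent to the background with a shared epoch timestamp baked into its log name, and teardown later signals whatever the pid files record (the kill -TERM calls near the start of this excerpt). A minimal sketch of that start/stop pattern, assuming a flat output directory; the pid-file bookkeeping here is a simplified stand-in for SPDK's pm/common helpers, and the local collector paths are hypothetical.

#!/usr/bin/env bash
# Sketch only: launch collectors in the background, record PIDs, TERM them on exit.
out=/tmp/power            # assumed output dir; the log uses .../spdk/../output/power
ts=$(date +%s)            # one timestamp shared by every collector, as in the trace

start_monitor() {         # $1 = collector script
    "$1" -d "$out" -l -p "monitor.autotest.sh.$ts" &
    echo $! > "$out/${1##*/}.pid"   # assumption: pm/common persists PIDs like this
}

stop_monitors() {
    local pidfile pid
    for pidfile in "$out"/*.pid; do
        [[ -e $pidfile ]] || continue
        pid=$(<"$pidfile")
        kill -TERM "$pid" 2>/dev/null || true   # mirrors the kill -TERM lines above
    done
}

mkdir -p "$out"
start_monitor ./collect-cpu-load   # hypothetical local copies of the collectors
start_monitor ./collect-vmstat
trap stop_monitors EXIT

Sharing one $ts across all collectors is what lets the four monitor logs above sort together as a single run.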
09:25:43 -- common/autotest_common.sh@751 -- # xtrace_disable 00:04:43.594 09:25:43 -- common/autotest_common.sh@10 -- # set +x 00:04:43.594 09:25:43 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:43.594 09:25:43 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:43.594 09:25:43 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:43.594 09:25:43 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:43.594 09:25:43 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:43.594 09:25:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:43.594 09:25:43 -- common/autotest_common.sh@1443 -- # uname 00:04:43.594 09:25:43 -- common/autotest_common.sh@1443 -- # '[' Linux = FreeBSD ']' 00:04:43.594 09:25:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:43.594 09:25:43 -- common/autotest_common.sh@1463 -- # uname 00:04:43.594 09:25:43 -- common/autotest_common.sh@1463 -- # [[ Linux = FreeBSD ]] 00:04:43.594 09:25:43 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:43.594 09:25:43 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:43.594 lcov: LCOV version 1.15 00:04:43.855 09:25:43 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:58.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:58.768 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:13.680 09:26:13 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:13.680 09:26:13 -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:13.680 09:26:13 -- common/autotest_common.sh@10 -- # set +x 00:05:13.680 09:26:13 -- spdk/autotest.sh@78 -- # rm -f 00:05:13.680 09:26:13 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:17.889 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:05:17.889 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:05:17.889 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:05:17.889 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:05:17.889 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:05:17.889 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:05:17.889 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:05:17.889 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:05:17.889 0000:65:00.0 (144d a80a): Already using the nvme driver 00:05:17.889 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:05:17.889 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:05:17.889 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:05:17.889 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:05:17.889 0000:00:01.2 (8086 0b00): Already using 
the ioatdma driver 00:05:17.889 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:05:17.889 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:05:17.889 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:05:17.889 09:26:17 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:17.889 09:26:17 -- common/autotest_common.sh@1600 -- # zoned_devs=() 00:05:17.889 09:26:17 -- common/autotest_common.sh@1600 -- # local -gA zoned_devs 00:05:17.889 09:26:17 -- common/autotest_common.sh@1601 -- # local nvme bdf 00:05:17.889 09:26:17 -- common/autotest_common.sh@1603 -- # for nvme in /sys/block/nvme* 00:05:17.889 09:26:17 -- common/autotest_common.sh@1604 -- # is_block_zoned nvme0n1 00:05:17.889 09:26:17 -- common/autotest_common.sh@1593 -- # local device=nvme0n1 00:05:17.889 09:26:17 -- common/autotest_common.sh@1595 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:17.889 09:26:17 -- common/autotest_common.sh@1596 -- # [[ none != none ]] 00:05:17.889 09:26:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:17.890 09:26:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.890 09:26:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:17.890 09:26:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:17.890 09:26:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:17.890 09:26:17 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:17.890 No valid GPT data, bailing 00:05:17.890 09:26:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:17.890 09:26:17 -- scripts/common.sh@394 -- # pt= 00:05:17.890 09:26:17 -- scripts/common.sh@395 -- # return 1 00:05:17.890 09:26:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:17.890 1+0 records in 00:05:17.890 1+0 records out 00:05:17.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.004353 s, 241 MB/s 00:05:17.890 09:26:17 -- spdk/autotest.sh@105 -- # sync 00:05:17.890 09:26:17 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:17.890 09:26:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:17.890 09:26:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:27.926 09:26:26 -- spdk/autotest.sh@111 -- # uname -s 00:05:27.926 09:26:26 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:27.926 09:26:26 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:27.926 09:26:26 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:30.481 Hugepages 00:05:30.481 node hugesize free / total 00:05:30.481 node0 1048576kB 0 / 0 00:05:30.481 node0 2048kB 0 / 0 00:05:30.481 node1 1048576kB 0 / 0 00:05:30.481 node1 2048kB 0 / 0 00:05:30.481 00:05:30.481 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:30.481 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:30.481 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:30.481 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:30.481 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:30.481 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:30.481 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:30.481 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:30.481 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:30.481 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:30.481 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:30.481 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:30.481 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 
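The setup.sh reset pass above enumerates NVMe namespaces, skips zoned ones (the /sys/block/*/queue/zoned probe), and zeroes the first MiB of any namespace whose partition probe comes back empty ("No valid GPT data, bailing" followed by the dd). A condensed sketch of that decision, assuming blkid semantics as shown in the trace; using blkid alone is a stand-in for the spdk-gpt.py check the script runs first.

#!/usr/bin/env bash
# Sketch only: clear stale metadata from non-zoned, partition-table-free namespaces.
is_block_zoned() {                        # $1 = bare device name, e.g. nvme0n1
    local zoned=/sys/block/$1/queue/zoned
    [[ -e $zoned && $(<"$zoned") != none ]]
}

for dev in /dev/nvme*n1; do
    [[ -e $dev ]] || continue
    name=${dev##*/}
    if is_block_zoned "$name"; then
        echo "skipping zoned device $dev"
        continue
    fi
    pt=$(blkid -s PTTYPE -o value "$dev" || true)   # empty when no table is found
    if [[ -z $pt ]]; then
        # no recognizable partition table -> wipe the first MiB, as in the log
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done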
00:05:30.481 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:30.481 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:30.481 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:30.481 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:30.481 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:30.481 09:26:29 -- spdk/autotest.sh@117 -- # uname -s 00:05:30.481 09:26:29 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:30.481 09:26:29 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:30.481 09:26:29 -- nvme/functions.sh@217 -- # scan_nvme_ctrls 00:05:30.481 09:26:29 -- nvme/functions.sh@47 -- # local ctrl ctrl_dev reg val ns pci 00:05:30.481 09:26:29 -- nvme/functions.sh@49 -- # for ctrl in /sys/class/nvme/nvme* 00:05:30.481 09:26:29 -- nvme/functions.sh@50 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:05:30.481 09:26:29 -- nvme/functions.sh@51 -- # pci=0000:65:00.0 00:05:30.481 09:26:29 -- nvme/functions.sh@52 -- # pci_can_use 0000:65:00.0 00:05:30.481 09:26:29 -- scripts/common.sh@18 -- # local i 00:05:30.481 09:26:29 -- scripts/common.sh@21 -- # [[ =~ 0000:65:00.0 ]] 00:05:30.481 09:26:29 -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:30.481 09:26:29 -- scripts/common.sh@27 -- # return 0 00:05:30.481 09:26:29 -- nvme/functions.sh@53 -- # ctrl_dev=nvme0 00:05:30.481 09:26:29 -- nvme/functions.sh@54 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:05:30.481 09:26:29 -- nvme/functions.sh@19 -- # local ref=nvme0 reg val 00:05:30.481 09:26:29 -- nvme/functions.sh@20 -- # shift 00:05:30.481 09:26:29 -- nvme/functions.sh@22 -- # local -gA 'nvme0=()' 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.481 09:26:29 -- nvme/functions.sh@18 -- # nvme id-ctrl /dev/nvme0 00:05:30.481 09:26:29 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.481 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x144d ]] 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[vid]="0x144d"' 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # nvme0[vid]=0x144d 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.481 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x144d ]] 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[ssvid]="0x144d"' 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # nvme0[ssvid]=0x144d 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.481 09:26:29 -- nvme/functions.sh@24 -- # [[ -n S64GNE0R605499 ]] 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[sn]="S64GNE0R605499 "' 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # nvme0[sn]='S64GNE0R605499 ' 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.481 09:26:29 -- nvme/functions.sh@24 -- # [[ -n SAMSUNG MZQL21T9HCJR-00A07 ]] 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[mn]="SAMSUNG MZQL21T9HCJR-00A07 "' 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # nvme0[mn]='SAMSUNG MZQL21T9HCJR-00A07 ' 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.481 09:26:29 -- nvme/functions.sh@24 -- # [[ -n GDC5302Q ]] 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[fr]="GDC5302Q"' 00:05:30.481 
09:26:29 -- nvme/functions.sh@25 -- # nvme0[fr]=GDC5302Q 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.481 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 2 ]] 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[rab]="2"' 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # nvme0[rab]=2 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.481 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 002538 ]] 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[ieee]="002538"' 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # nvme0[ieee]=002538 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.481 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[cmic]="0"' 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # nvme0[cmic]=0 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.481 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 9 ]] 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[mdts]="9"' 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # nvme0[mdts]=9 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.481 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x6 ]] 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[cntlid]="0x6"' 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # nvme0[cntlid]=0x6 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.481 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x10400 ]] 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[ver]="0x10400"' 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # nvme0[ver]=0x10400 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.481 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.481 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x7a1200 ]] 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[rtd3r]="0x7a1200"' 00:05:30.481 09:26:29 -- nvme/functions.sh@25 -- # nvme0[rtd3r]=0x7a1200 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x7a1200 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[rtd3e]="0x7a1200"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[rtd3e]=0x7a1200 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x300 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[oaes]="0x300"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[oaes]=0x300 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x80 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[ctratt]="0x80"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[ctratt]=0x80 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 
09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[rrls]="0"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[rrls]=0 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[cntrltype]="1"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[cntrltype]=1 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[crdt1]="0"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[crdt1]=0 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[crdt2]="0"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[crdt2]=0 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[crdt3]="0"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[crdt3]=0 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[nvmsr]="1"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[nvmsr]=1 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[vwci]="0"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[vwci]=0 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[mec]="1"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[mec]=1 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x5f ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[oacs]="0x5f"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[oacs]=0x5f 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[acl]="7"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # 
nvme0[acl]=7 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[aerl]="3"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[aerl]=3 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x17 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[frmw]="0x17"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[frmw]=0x17 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0xe ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[lpa]="0xe"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[lpa]=0xe 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 63 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[elpe]="63"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[elpe]=63 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[npss]="1"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[npss]=1 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x1 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[avscc]="0x1"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[avscc]=0x1 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[apsta]="0"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[apsta]=0 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 353 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[wctemp]="353"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[wctemp]=353 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 356 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[cctemp]="356"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[cctemp]=356 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[mtfa]="0"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[mtfa]=0 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 
'nvme0[hmpre]="0"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[hmpre]=0 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[hmmin]="0"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[hmmin]=0 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 1920383410176 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[tnvmcap]="1920383410176"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[tnvmcap]=1920383410176 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[unvmcap]="0"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[unvmcap]=0 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[rpmbs]="0"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[rpmbs]=0 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 35 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[edstt]="35"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[edstt]=35 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[dsto]="1"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[dsto]=1 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[fwug]="0"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[fwug]=0 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[kas]="0"' 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # nvme0[kas]=0 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.482 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.482 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.482 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[hctma]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[hctma]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[mntmt]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[mntmt]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- 
nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[mxtmt]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[mxtmt]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[sanicap]="0x3"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[sanicap]=0x3 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[hmminds]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[hmminds]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[hmmaxd]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[hmmaxd]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[nsetidmax]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[nsetidmax]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[endgidmax]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[endgidmax]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[anatt]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[anatt]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[anacap]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[anacap]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[anagrpmax]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[anagrpmax]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[nanagrpid]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[nanagrpid]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[pels]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[pels]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 
00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[domainid]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[domainid]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[megcap]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[megcap]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x66 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[sqes]="0x66"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[sqes]=0x66 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x44 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[cqes]="0x44"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[cqes]=0x44 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 256 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[maxcmd]="256"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[maxcmd]=256 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 32 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[nn]="32"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[nn]=32 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x5f ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[oncs]="0x5f"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[oncs]=0x5f 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[fuses]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[fuses]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x4 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[fna]="0x4"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[fna]=0x4 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x6 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[vwc]="0x6"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[vwc]=0x6 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 1023 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[awun]="1023"' 00:05:30.483 09:26:29 -- 
nvme/functions.sh@25 -- # nvme0[awun]=1023 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[awupf]="7"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[awupf]=7 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[icsvscc]="1"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[icsvscc]=1 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[nwpc]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[nwpc]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[acwu]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[acwu]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[ocfs]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[ocfs]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[sgls]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[sgls]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[mnan]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[mnan]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[maxdna]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[maxdna]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[maxcna]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[maxcna]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[oaqd]="0"' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[oaqd]=0 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.483 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.483 09:26:29 -- nvme/functions.sh@24 -- # [[ -n nqn.1994-11.com.samsung:nvme:PM9A3:2.5-inch:S64GNE0R605499 ]] 
00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[subnqn]="nqn.1994-11.com.samsung:nvme:PM9A3:2.5-inch:S64GNE0R605499 "' 00:05:30.483 09:26:29 -- nvme/functions.sh@25 -- # nvme0[subnqn]='nqn.1994-11.com.samsung:nvme:PM9A3:2.5-inch:S64GNE0R605499 ' 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[ioccsz]="0"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0[ioccsz]=0 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[iorcsz]="0"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0[iorcsz]=0 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[icdoff]="0"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0[icdoff]=0 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[fcatt]="0"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0[fcatt]=0 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[msdbd]="0"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0[msdbd]=0 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[ofcs]="0"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0[ofcs]=0 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n mp:25.00W operational enlat:70 exlat:70 rrt:0 rrl:0 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:70 exlat:70 rrt:0 rrl:0"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0[ps0]='mp:25.00W operational enlat:70 exlat:70 rrt:0 rrl:0' 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 rwl:0 idle_power:4.00W active_power:14.00W ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:4.00W active_power:14.00W"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0[rwt]='0 rwl:0 idle_power:4.00W active_power:14.00W' 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 80K 128KiB SW ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[active_power_workload]="80K 128KiB SW"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0[active_power_workload]='80K 128KiB SW' 
00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n mp:8.00W operational enlat:70 exlat:70 rrt:1 rrl:1 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[ps1]="mp:8.00W operational enlat:70 exlat:70 rrt:1 rrl:1"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0[ps1]='mp:8.00W operational enlat:70 exlat:70 rrt:1 rrl:1' 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 1 rwl:1 idle_power:4.00W active_power:8.00W ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[rwt]="1 rwl:1 idle_power:4.00W active_power:8.00W"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0[rwt]='1 rwl:1 idle_power:4.00W active_power:8.00W' 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 1MiB 32 RW, 30s idle ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0[active_power_workload]="1MiB 32 RW, 30s idle"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0[active_power_workload]='1MiB 32 RW, 30s idle' 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@55 -- # local -n _ctrl_ns=nvme0_ns 00:05:30.484 09:26:29 -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:05:30.484 09:26:29 -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@58 -- # ns_dev=nvme0n1 00:05:30.484 09:26:29 -- nvme/functions.sh@59 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:05:30.484 09:26:29 -- nvme/functions.sh@19 -- # local ref=nvme0n1 reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@20 -- # shift 00:05:30.484 09:26:29 -- nvme/functions.sh@22 -- # local -gA 'nvme0n1=()' 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@18 -- # nvme id-ns /dev/nvme0n1 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0xdf8fe2b0 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nsze]="0xdf8fe2b0"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[nsze]=0xdf8fe2b0 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0xdf8fe2b0 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[ncap]="0xdf8fe2b0"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[ncap]=0xdf8fe2b0 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x14badb0 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nuse]="0x14badb0"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[nuse]=0x14badb0 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- 
nvme/functions.sh@24 -- # [[ -n 0x1a ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nsfeat]="0x1a"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[nsfeat]=0x1a 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nlbaf]="1"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[nlbaf]=1 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[flbas]="0"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[flbas]=0 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[mc]="0"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[mc]=0 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[dpc]="0"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[dpc]=0 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[dps]="0"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[dps]=0 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nmic]="0"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[nmic]=0 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[rescap]="0"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[rescap]=0 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0x80 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[fpi]="0x80"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[fpi]=0x80 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 9 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[dlfeat]="9"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[dlfeat]=9 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 1023 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nawun]="1023"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[nawun]=1023 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- 
# IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nawupf]="7"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[nawupf]=7 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nacwu]="0"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[nacwu]=0 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 1023 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nabsn]="1023"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[nabsn]=1023 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.484 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.484 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nabo]="0"' 00:05:30.484 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[nabo]=0 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nabspf]="7"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[nabspf]=7 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[noiob]="0"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[noiob]=0 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 1920383410176 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nvmcap]="1920383410176"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[nvmcap]=1920383410176 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 255 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[npwg]="255"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[npwg]=255 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[npwa]="7"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[npwa]=7 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 255 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[npdg]="255"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[npdg]=255 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # 
eval 'nvme0n1[npda]="7"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[npda]=7 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 255 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nows]="255"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[nows]=255 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[mssrl]="0"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[mssrl]=0 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[mcl]="0"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[mcl]=0 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[msrc]="0"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[msrc]=0 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nulbaf]="0"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[nulbaf]=0 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[anagrpid]="0"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[anagrpid]=0 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nsattr]="0"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[nsattr]=0 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nvmsetid]="0"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[nvmsetid]=0 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[endgid]="0"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[endgid]=0 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 363447305260549900253845000000a3 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nguid]="363447305260549900253845000000a3"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[nguid]=363447305260549900253845000000a3 00:05:30.485 09:26:29 -- 
nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[eui64]=0000000000000000 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 (in use) ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 (in use)"' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 (in use)' 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf1]="ms:0 lbads:12 rp:0 "' 00:05:30.485 09:26:29 -- nvme/functions.sh@25 -- # nvme0n1[lbaf1]='ms:0 lbads:12 rp:0 ' 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # IFS=: 00:05:30.485 09:26:29 -- nvme/functions.sh@23 -- # read -r reg val 00:05:30.485 09:26:29 -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:05:30.485 09:26:29 -- nvme/functions.sh@62 -- # ctrls_g["$ctrl_dev"]=nvme0 00:05:30.485 09:26:29 -- nvme/functions.sh@63 -- # nvmes_g["$ctrl_dev"]=nvme0_ns 00:05:30.485 09:26:29 -- nvme/functions.sh@64 -- # bdfs_g["$ctrl_dev"]=0000:65:00.0 00:05:30.485 09:26:29 -- nvme/functions.sh@65 -- # ordered_ctrls_g[${ctrl_dev/nvme/}]=nvme0 00:05:30.485 09:26:29 -- nvme/functions.sh@67 -- # (( 1 > 0 )) 00:05:30.485 09:26:29 -- nvme/functions.sh@219 -- # local _ctrls ctrl 00:05:30.485 09:26:29 -- nvme/functions.sh@220 -- # local unvmcap tnvmcap cntlid size blksize=512 00:05:30.485 09:26:29 -- nvme/functions.sh@222 -- # _ctrls=($(get_nvme_with_ns_management)) 00:05:30.485 09:26:29 -- nvme/functions.sh@222 -- # get_nvme_with_ns_management 00:05:30.485 09:26:29 -- nvme/functions.sh@157 -- # local _ctrls 00:05:30.485 09:26:29 -- nvme/functions.sh@159 -- # _ctrls=($(get_nvmes_with_ns_management)) 00:05:30.485 09:26:29 -- nvme/functions.sh@159 -- # get_nvmes_with_ns_management 00:05:30.485 09:26:29 -- nvme/functions.sh@146 -- # (( 1 == 0 )) 00:05:30.485 09:26:29 -- nvme/functions.sh@148 -- # local ctrl 00:05:30.485 09:26:29 -- nvme/functions.sh@149 -- # for ctrl in "${!ctrls_g[@]}" 00:05:30.485 09:26:29 -- nvme/functions.sh@150 -- # get_oacs nvme0 nsmgt 00:05:30.485 09:26:29 -- nvme/functions.sh@123 -- # local ctrl=nvme0 bit=nsmgt 00:05:30.485 09:26:29 -- nvme/functions.sh@124 -- # local -A bits 00:05:30.485 09:26:29 -- nvme/functions.sh@127 -- # bits["ss/sr"]=1 00:05:30.485 09:26:29 -- nvme/functions.sh@128 -- # bits["fnvme"]=2 00:05:30.485 09:26:29 -- nvme/functions.sh@129 -- # bits["fc/fi"]=4 00:05:30.485 09:26:29 -- nvme/functions.sh@130 -- # bits["nsmgt"]=8 00:05:30.485 09:26:29 -- nvme/functions.sh@131 -- # bits["self-test"]=16 00:05:30.485 09:26:29 -- nvme/functions.sh@132 -- # bits["directives"]=32 00:05:30.485 09:26:29 -- nvme/functions.sh@133 -- # bits["nvme-mi-s/r"]=64 00:05:30.485 09:26:29 -- nvme/functions.sh@134 -- # bits["virtmgt"]=128 00:05:30.485 09:26:29 -- nvme/functions.sh@135 -- # bits["doorbellbuf"]=256 00:05:30.485 09:26:29 -- nvme/functions.sh@136 -- # bits["getlba"]=512 00:05:30.485 09:26:29 -- nvme/functions.sh@137 -- # bits["commfeatlock"]=1024 
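
[Editor's note] The eval/read loop traced above is nvme/functions.sh walking `nvme id-ns` style "reg : val" output one pair at a time into a bash associative array, then (via get_oacs) testing the controller's OACS word against a bit table so that only controllers with namespace-management support are kept. A minimal standalone sketch of both patterns, using values taken from this trace; the sample input lines fed in below are illustrative, not SPDK code:

    #!/usr/bin/env bash
    declare -A nvme0n1
    # Split each line on the first ':'; the last read variable keeps any
    # embedded colons, which is how values like "ms:0 lbads:9 rp:0 (in use)"
    # survive intact in entries such as lbaf0 above.
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/} val=${val# }
        [[ -n $reg && -n $val ]] && eval "nvme0n1[$reg]=\"$val\""
    done < <(printf '%s\n' 'nsfeat : 0x1a' 'nlbaf : 1' 'nvmcap : 1920383410176')
    echo "${nvme0n1[nvmcap]}"   # 1920383410176

    # OACS check as in get_oacs: bit 3 (value 8) is namespace management,
    # and this controller reports oacs=0x5f, so the bit is set.
    declare -A bits=([nsmgt]=8)
    oacs=0x5f
    (( oacs & bits[nsmgt] )) && echo "nvme0 supports namespace management"
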
00:05:30.485 09:26:29 -- nvme/functions.sh@139 -- # bit=nsmgt 00:05:30.485 09:26:29 -- nvme/functions.sh@140 -- # [[ -n 8 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@142 -- # get_nvme_ctrl_feature nvme0 oacs 00:05:30.485 09:26:29 -- nvme/functions.sh@71 -- # local ctrl=nvme0 reg=oacs 00:05:30.485 09:26:29 -- nvme/functions.sh@73 -- # [[ -n nvme0 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme0 00:05:30.485 09:26:29 -- nvme/functions.sh@77 -- # [[ -n 0x5f ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@78 -- # echo 0x5f 00:05:30.485 09:26:29 -- nvme/functions.sh@142 -- # (( 0x5f & bits[nsmgt] )) 00:05:30.485 09:26:29 -- nvme/functions.sh@150 -- # echo nvme0 00:05:30.485 09:26:29 -- nvme/functions.sh@153 -- # return 0 00:05:30.485 09:26:29 -- nvme/functions.sh@160 -- # (( 1 > 0 )) 00:05:30.485 09:26:29 -- nvme/functions.sh@161 -- # echo nvme0 00:05:30.485 09:26:29 -- nvme/functions.sh@162 -- # return 0 00:05:30.485 09:26:29 -- nvme/functions.sh@224 -- # for ctrl in "${_ctrls[@]}" 00:05:30.485 09:26:29 -- nvme/functions.sh@229 -- # get_nvme_ctrl_feature nvme0 unvmcap 00:05:30.485 09:26:29 -- nvme/functions.sh@71 -- # local ctrl=nvme0 reg=unvmcap 00:05:30.485 09:26:29 -- nvme/functions.sh@73 -- # [[ -n nvme0 ]] 00:05:30.485 09:26:29 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme0 00:05:30.485 09:26:29 -- nvme/functions.sh@77 -- # [[ -n 0 ]] 00:05:30.486 09:26:29 -- nvme/functions.sh@78 -- # echo 0 00:05:30.486 09:26:29 -- nvme/functions.sh@229 -- # unvmcap=0 00:05:30.486 09:26:29 -- nvme/functions.sh@230 -- # get_nvme_ctrl_feature nvme0 tnvmcap 00:05:30.486 09:26:29 -- nvme/functions.sh@71 -- # local ctrl=nvme0 reg=tnvmcap 00:05:30.486 09:26:29 -- nvme/functions.sh@73 -- # [[ -n nvme0 ]] 00:05:30.486 09:26:29 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme0 00:05:30.486 09:26:29 -- nvme/functions.sh@77 -- # [[ -n 1920383410176 ]] 00:05:30.486 09:26:29 -- nvme/functions.sh@78 -- # echo 1920383410176 00:05:30.486 09:26:29 -- nvme/functions.sh@230 -- # tnvmcap=1920383410176 00:05:30.486 09:26:29 -- nvme/functions.sh@231 -- # get_nvme_ctrl_feature nvme0 cntlid 00:05:30.486 09:26:29 -- nvme/functions.sh@71 -- # local ctrl=nvme0 reg=cntlid 00:05:30.486 09:26:29 -- nvme/functions.sh@73 -- # [[ -n nvme0 ]] 00:05:30.486 09:26:29 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme0 00:05:30.486 09:26:29 -- nvme/functions.sh@77 -- # [[ -n 0x6 ]] 00:05:30.486 09:26:29 -- nvme/functions.sh@78 -- # echo 0x6 00:05:30.486 09:26:29 -- nvme/functions.sh@231 -- # cntlid=0x6 00:05:30.486 09:26:29 -- nvme/functions.sh@232 -- # (( unvmcap == 0 )) 00:05:30.486 09:26:29 -- nvme/functions.sh@234 -- # continue 00:05:30.486 09:26:29 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:30.486 09:26:29 -- common/autotest_common.sh@733 -- # xtrace_disable 00:05:30.486 09:26:29 -- common/autotest_common.sh@10 -- # set +x 00:05:30.486 09:26:30 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:30.486 09:26:30 -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:30.486 09:26:30 -- common/autotest_common.sh@10 -- # set +x 00:05:30.486 09:26:30 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:34.697 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:34.697 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:34.697 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:34.697 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:34.697 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:34.697 0000:80:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:05:34.697 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:34.697 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:34.697 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:34.697 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:34.697 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:34.697 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:34.697 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:34.697 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:34.697 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:34.697 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:36.081 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:36.342 09:26:35 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:36.342 09:26:35 -- common/autotest_common.sh@733 -- # xtrace_disable 00:05:36.342 09:26:35 -- common/autotest_common.sh@10 -- # set +x 00:05:36.342 09:26:35 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:36.342 09:26:35 -- common/autotest_common.sh@1519 -- # local bdfs bdf bdf_id 00:05:36.342 09:26:35 -- common/autotest_common.sh@1521 -- # mapfile -t bdfs 00:05:36.342 09:26:35 -- common/autotest_common.sh@1521 -- # get_nvme_bdfs_by_id 0x0a54 00:05:36.342 09:26:35 -- common/autotest_common.sh@1503 -- # bdfs=() 00:05:36.342 09:26:35 -- common/autotest_common.sh@1503 -- # _bdfs=() 00:05:36.342 09:26:35 -- common/autotest_common.sh@1503 -- # local bdfs _bdfs bdf 00:05:36.342 09:26:35 -- common/autotest_common.sh@1504 -- # _bdfs=($(get_nvme_bdfs)) 00:05:36.342 09:26:35 -- common/autotest_common.sh@1504 -- # get_nvme_bdfs 00:05:36.342 09:26:35 -- common/autotest_common.sh@1484 -- # bdfs=() 00:05:36.342 09:26:35 -- common/autotest_common.sh@1484 -- # local bdfs 00:05:36.342 09:26:35 -- common/autotest_common.sh@1485 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:36.343 09:26:35 -- common/autotest_common.sh@1485 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:36.343 09:26:35 -- common/autotest_common.sh@1485 -- # jq -r '.config[].params.traddr' 00:05:36.604 09:26:36 -- common/autotest_common.sh@1486 -- # (( 1 == 0 )) 00:05:36.604 09:26:36 -- common/autotest_common.sh@1490 -- # printf '%s\n' 0000:65:00.0 00:05:36.604 09:26:36 -- common/autotest_common.sh@1506 -- # for bdf in "${_bdfs[@]}" 00:05:36.604 09:26:36 -- common/autotest_common.sh@1507 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:36.604 09:26:36 -- common/autotest_common.sh@1507 -- # device=0xa80a 00:05:36.604 09:26:36 -- common/autotest_common.sh@1508 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:36.604 09:26:36 -- common/autotest_common.sh@1513 -- # (( 0 > 0 )) 00:05:36.604 09:26:36 -- common/autotest_common.sh@1513 -- # return 0 00:05:36.604 09:26:36 -- common/autotest_common.sh@1522 -- # [[ -z '' ]] 00:05:36.604 09:26:36 -- common/autotest_common.sh@1523 -- # return 0 00:05:36.604 09:26:36 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:36.604 09:26:36 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:36.604 09:26:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:36.604 09:26:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:36.604 09:26:36 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:36.604 09:26:36 -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:36.604 09:26:36 -- common/autotest_common.sh@10 -- # set +x 00:05:36.604 09:26:36 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:36.604 09:26:36 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
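
[Editor's note] opal_revert_cleanup above only acts on controllers whose PCI device id is 0x0a54; the Samsung controller at 0000:65:00.0 reports 0xa80a, so the `[[ 0xa80a == \0\x\0\a\5\4 ]]` test fails and the cleanup is a no-op on this node. A condensed sketch of that filter — not the exact autotest_common.sh body — assuming the gen_nvme.sh JSON layout shown in the trace:

    get_nvme_bdfs_by_id() {
        local id=$1 bdf bdfs=()
        # gen_nvme.sh emits an SPDK bdev config; each traddr is an NVMe PCI BDF
        for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
            # sysfs exposes the PCI device id as e.g. "0xa80a"
            [[ $(< "/sys/bus/pci/devices/$bdf/device") == "$id" ]] && bdfs+=("$bdf")
        done
        (( ${#bdfs[@]} )) && printf '%s\n' "${bdfs[@]}"
    }

    # as in the trace: mapfile -t bdfs < <(get_nvme_bdfs_by_id 0x0a54)
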
00:05:36.604 09:26:36 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:05:36.604 09:26:36 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:05:36.604 09:26:36 -- common/autotest_common.sh@10 -- # set +x 00:05:36.604 ************************************ 00:05:36.604 START TEST env 00:05:36.604 ************************************ 00:05:36.604 09:26:36 env -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:36.604 * Looking for test storage... 00:05:36.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:36.604 09:26:36 env -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:05:36.604 09:26:36 env -- common/autotest_common.sh@1626 -- # lcov --version 00:05:36.604 09:26:36 env -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:05:36.867 09:26:36 env -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:05:36.867 09:26:36 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.867 09:26:36 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.867 09:26:36 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.867 09:26:36 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.867 09:26:36 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.867 09:26:36 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.867 09:26:36 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.867 09:26:36 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.867 09:26:36 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.867 09:26:36 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.867 09:26:36 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.867 09:26:36 env -- scripts/common.sh@344 -- # case "$op" in 00:05:36.867 09:26:36 env -- scripts/common.sh@345 -- # : 1 00:05:36.867 09:26:36 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.867 09:26:36 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.867 09:26:36 env -- scripts/common.sh@365 -- # decimal 1 00:05:36.867 09:26:36 env -- scripts/common.sh@353 -- # local d=1 00:05:36.867 09:26:36 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.867 09:26:36 env -- scripts/common.sh@355 -- # echo 1 00:05:36.867 09:26:36 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.867 09:26:36 env -- scripts/common.sh@366 -- # decimal 2 00:05:36.867 09:26:36 env -- scripts/common.sh@353 -- # local d=2 00:05:36.867 09:26:36 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.867 09:26:36 env -- scripts/common.sh@355 -- # echo 2 00:05:36.867 09:26:36 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.867 09:26:36 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.867 09:26:36 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.867 09:26:36 env -- scripts/common.sh@368 -- # return 0 00:05:36.867 09:26:36 env -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.867 09:26:36 env -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:05:36.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.867 --rc genhtml_branch_coverage=1 00:05:36.867 --rc genhtml_function_coverage=1 00:05:36.867 --rc genhtml_legend=1 00:05:36.867 --rc geninfo_all_blocks=1 00:05:36.867 --rc geninfo_unexecuted_blocks=1 00:05:36.867 00:05:36.867 ' 00:05:36.867 09:26:36 env -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:05:36.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.867 --rc genhtml_branch_coverage=1 00:05:36.867 --rc genhtml_function_coverage=1 00:05:36.867 --rc genhtml_legend=1 00:05:36.867 --rc geninfo_all_blocks=1 00:05:36.867 --rc geninfo_unexecuted_blocks=1 00:05:36.867 00:05:36.867 ' 00:05:36.867 09:26:36 env -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:05:36.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.867 --rc genhtml_branch_coverage=1 00:05:36.867 --rc genhtml_function_coverage=1 00:05:36.867 --rc genhtml_legend=1 00:05:36.867 --rc geninfo_all_blocks=1 00:05:36.867 --rc geninfo_unexecuted_blocks=1 00:05:36.867 00:05:36.867 ' 00:05:36.867 09:26:36 env -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:05:36.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.867 --rc genhtml_branch_coverage=1 00:05:36.867 --rc genhtml_function_coverage=1 00:05:36.867 --rc genhtml_legend=1 00:05:36.867 --rc geninfo_all_blocks=1 00:05:36.867 --rc geninfo_unexecuted_blocks=1 00:05:36.867 00:05:36.867 ' 00:05:36.867 09:26:36 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:36.867 09:26:36 env -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:05:36.867 09:26:36 env -- common/autotest_common.sh@1110 -- # xtrace_disable 00:05:36.867 09:26:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.867 ************************************ 00:05:36.867 START TEST env_memory 00:05:36.867 ************************************ 00:05:36.867 09:26:36 env.env_memory -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:36.867 00:05:36.867 00:05:36.867 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.867 http://cunit.sourceforge.net/ 00:05:36.867 00:05:36.867 00:05:36.867 Suite: memory 00:05:36.867 Test: alloc and free memory map ...[2024-10-07 09:26:36.391724] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:36.867 passed 00:05:36.867 Test: mem map translation ...[2024-10-07 09:26:36.417313] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:36.867 [2024-10-07 09:26:36.417357] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:36.867 [2024-10-07 09:26:36.417402] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:36.867 [2024-10-07 09:26:36.417409] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:36.867 passed 00:05:36.867 Test: mem map registration ...[2024-10-07 09:26:36.472532] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:36.867 [2024-10-07 09:26:36.472553] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:36.867 passed 00:05:37.130 Test: mem map adjacent registrations ...passed 00:05:37.130 00:05:37.130 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.130 suites 1 1 n/a 0 0 00:05:37.130 tests 4 4 4 0 0 00:05:37.130 asserts 152 152 152 0 n/a 00:05:37.130 00:05:37.130 Elapsed time = 0.192 seconds 00:05:37.130 00:05:37.130 real 0m0.206s 00:05:37.130 user 0m0.193s 00:05:37.130 sys 0m0.013s 00:05:37.130 09:26:36 env.env_memory -- common/autotest_common.sh@1129 -- # xtrace_disable 00:05:37.130 09:26:36 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:37.130 ************************************ 00:05:37.130 END TEST env_memory 00:05:37.130 ************************************ 00:05:37.130 09:26:36 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:37.130 09:26:36 env -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:05:37.130 09:26:36 env -- common/autotest_common.sh@1110 -- # xtrace_disable 00:05:37.130 09:26:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.130 ************************************ 00:05:37.130 START TEST env_vtophys 00:05:37.130 ************************************ 00:05:37.130 09:26:36 env.env_vtophys -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:37.130 EAL: lib.eal log level changed from notice to debug 00:05:37.130 EAL: Detected lcore 0 as core 0 on socket 0 00:05:37.130 EAL: Detected lcore 1 as core 1 on socket 0 00:05:37.130 EAL: Detected lcore 2 as core 2 on socket 0 00:05:37.130 EAL: Detected lcore 3 as core 3 on socket 0 00:05:37.130 EAL: Detected lcore 4 as core 4 on socket 0 00:05:37.130 EAL: Detected lcore 5 as core 5 on socket 0 00:05:37.130 EAL: Detected lcore 6 as core 6 on socket 0 00:05:37.130 EAL: Detected lcore 7 as core 7 on socket 0 00:05:37.130 EAL: Detected lcore 8 as core 8 on socket 0 00:05:37.130 EAL: Detected lcore 9 as core 9 on socket 0 00:05:37.130 EAL: Detected lcore 10 as 
core 10 on socket 0 00:05:37.130 EAL: Detected lcore 11 as core 11 on socket 0 00:05:37.130 EAL: Detected lcore 12 as core 12 on socket 0 00:05:37.130 EAL: Detected lcore 13 as core 13 on socket 0 00:05:37.130 EAL: Detected lcore 14 as core 14 on socket 0 00:05:37.130 EAL: Detected lcore 15 as core 15 on socket 0 00:05:37.130 EAL: Detected lcore 16 as core 16 on socket 0 00:05:37.130 EAL: Detected lcore 17 as core 17 on socket 0 00:05:37.130 EAL: Detected lcore 18 as core 18 on socket 0 00:05:37.130 EAL: Detected lcore 19 as core 19 on socket 0 00:05:37.130 EAL: Detected lcore 20 as core 20 on socket 0 00:05:37.130 EAL: Detected lcore 21 as core 21 on socket 0 00:05:37.130 EAL: Detected lcore 22 as core 22 on socket 0 00:05:37.130 EAL: Detected lcore 23 as core 23 on socket 0 00:05:37.130 EAL: Detected lcore 24 as core 24 on socket 0 00:05:37.130 EAL: Detected lcore 25 as core 25 on socket 0 00:05:37.130 EAL: Detected lcore 26 as core 26 on socket 0 00:05:37.130 EAL: Detected lcore 27 as core 27 on socket 0 00:05:37.130 EAL: Detected lcore 28 as core 28 on socket 0 00:05:37.130 EAL: Detected lcore 29 as core 29 on socket 0 00:05:37.130 EAL: Detected lcore 30 as core 30 on socket 0 00:05:37.130 EAL: Detected lcore 31 as core 31 on socket 0 00:05:37.130 EAL: Detected lcore 32 as core 32 on socket 0 00:05:37.130 EAL: Detected lcore 33 as core 33 on socket 0 00:05:37.130 EAL: Detected lcore 34 as core 34 on socket 0 00:05:37.130 EAL: Detected lcore 35 as core 35 on socket 0 00:05:37.130 EAL: Detected lcore 36 as core 0 on socket 1 00:05:37.130 EAL: Detected lcore 37 as core 1 on socket 1 00:05:37.130 EAL: Detected lcore 38 as core 2 on socket 1 00:05:37.130 EAL: Detected lcore 39 as core 3 on socket 1 00:05:37.130 EAL: Detected lcore 40 as core 4 on socket 1 00:05:37.130 EAL: Detected lcore 41 as core 5 on socket 1 00:05:37.130 EAL: Detected lcore 42 as core 6 on socket 1 00:05:37.130 EAL: Detected lcore 43 as core 7 on socket 1 00:05:37.130 EAL: Detected lcore 44 as core 8 on socket 1 00:05:37.130 EAL: Detected lcore 45 as core 9 on socket 1 00:05:37.130 EAL: Detected lcore 46 as core 10 on socket 1 00:05:37.130 EAL: Detected lcore 47 as core 11 on socket 1 00:05:37.130 EAL: Detected lcore 48 as core 12 on socket 1 00:05:37.130 EAL: Detected lcore 49 as core 13 on socket 1 00:05:37.130 EAL: Detected lcore 50 as core 14 on socket 1 00:05:37.130 EAL: Detected lcore 51 as core 15 on socket 1 00:05:37.130 EAL: Detected lcore 52 as core 16 on socket 1 00:05:37.130 EAL: Detected lcore 53 as core 17 on socket 1 00:05:37.130 EAL: Detected lcore 54 as core 18 on socket 1 00:05:37.130 EAL: Detected lcore 55 as core 19 on socket 1 00:05:37.130 EAL: Detected lcore 56 as core 20 on socket 1 00:05:37.130 EAL: Detected lcore 57 as core 21 on socket 1 00:05:37.130 EAL: Detected lcore 58 as core 22 on socket 1 00:05:37.130 EAL: Detected lcore 59 as core 23 on socket 1 00:05:37.130 EAL: Detected lcore 60 as core 24 on socket 1 00:05:37.130 EAL: Detected lcore 61 as core 25 on socket 1 00:05:37.130 EAL: Detected lcore 62 as core 26 on socket 1 00:05:37.130 EAL: Detected lcore 63 as core 27 on socket 1 00:05:37.130 EAL: Detected lcore 64 as core 28 on socket 1 00:05:37.130 EAL: Detected lcore 65 as core 29 on socket 1 00:05:37.130 EAL: Detected lcore 66 as core 30 on socket 1 00:05:37.130 EAL: Detected lcore 67 as core 31 on socket 1 00:05:37.130 EAL: Detected lcore 68 as core 32 on socket 1 00:05:37.130 EAL: Detected lcore 69 as core 33 on socket 1 00:05:37.130 EAL: Detected lcore 70 as core 34 on socket 1 
00:05:37.130 EAL: Detected lcore 71 as core 35 on socket 1 00:05:37.130 EAL: Detected lcore 72 as core 0 on socket 0 00:05:37.131 EAL: Detected lcore 73 as core 1 on socket 0 00:05:37.131 EAL: Detected lcore 74 as core 2 on socket 0 00:05:37.131 EAL: Detected lcore 75 as core 3 on socket 0 00:05:37.131 EAL: Detected lcore 76 as core 4 on socket 0 00:05:37.131 EAL: Detected lcore 77 as core 5 on socket 0 00:05:37.131 EAL: Detected lcore 78 as core 6 on socket 0 00:05:37.131 EAL: Detected lcore 79 as core 7 on socket 0 00:05:37.131 EAL: Detected lcore 80 as core 8 on socket 0 00:05:37.131 EAL: Detected lcore 81 as core 9 on socket 0 00:05:37.131 EAL: Detected lcore 82 as core 10 on socket 0 00:05:37.131 EAL: Detected lcore 83 as core 11 on socket 0 00:05:37.131 EAL: Detected lcore 84 as core 12 on socket 0 00:05:37.131 EAL: Detected lcore 85 as core 13 on socket 0 00:05:37.131 EAL: Detected lcore 86 as core 14 on socket 0 00:05:37.131 EAL: Detected lcore 87 as core 15 on socket 0 00:05:37.131 EAL: Detected lcore 88 as core 16 on socket 0 00:05:37.131 EAL: Detected lcore 89 as core 17 on socket 0 00:05:37.131 EAL: Detected lcore 90 as core 18 on socket 0 00:05:37.131 EAL: Detected lcore 91 as core 19 on socket 0 00:05:37.131 EAL: Detected lcore 92 as core 20 on socket 0 00:05:37.131 EAL: Detected lcore 93 as core 21 on socket 0 00:05:37.131 EAL: Detected lcore 94 as core 22 on socket 0 00:05:37.131 EAL: Detected lcore 95 as core 23 on socket 0 00:05:37.131 EAL: Detected lcore 96 as core 24 on socket 0 00:05:37.131 EAL: Detected lcore 97 as core 25 on socket 0 00:05:37.131 EAL: Detected lcore 98 as core 26 on socket 0 00:05:37.131 EAL: Detected lcore 99 as core 27 on socket 0 00:05:37.131 EAL: Detected lcore 100 as core 28 on socket 0 00:05:37.131 EAL: Detected lcore 101 as core 29 on socket 0 00:05:37.131 EAL: Detected lcore 102 as core 30 on socket 0 00:05:37.131 EAL: Detected lcore 103 as core 31 on socket 0 00:05:37.131 EAL: Detected lcore 104 as core 32 on socket 0 00:05:37.131 EAL: Detected lcore 105 as core 33 on socket 0 00:05:37.131 EAL: Detected lcore 106 as core 34 on socket 0 00:05:37.131 EAL: Detected lcore 107 as core 35 on socket 0 00:05:37.131 EAL: Detected lcore 108 as core 0 on socket 1 00:05:37.131 EAL: Detected lcore 109 as core 1 on socket 1 00:05:37.131 EAL: Detected lcore 110 as core 2 on socket 1 00:05:37.131 EAL: Detected lcore 111 as core 3 on socket 1 00:05:37.131 EAL: Detected lcore 112 as core 4 on socket 1 00:05:37.131 EAL: Detected lcore 113 as core 5 on socket 1 00:05:37.131 EAL: Detected lcore 114 as core 6 on socket 1 00:05:37.131 EAL: Detected lcore 115 as core 7 on socket 1 00:05:37.131 EAL: Detected lcore 116 as core 8 on socket 1 00:05:37.131 EAL: Detected lcore 117 as core 9 on socket 1 00:05:37.131 EAL: Detected lcore 118 as core 10 on socket 1 00:05:37.131 EAL: Detected lcore 119 as core 11 on socket 1 00:05:37.131 EAL: Detected lcore 120 as core 12 on socket 1 00:05:37.131 EAL: Detected lcore 121 as core 13 on socket 1 00:05:37.131 EAL: Detected lcore 122 as core 14 on socket 1 00:05:37.131 EAL: Detected lcore 123 as core 15 on socket 1 00:05:37.131 EAL: Detected lcore 124 as core 16 on socket 1 00:05:37.131 EAL: Detected lcore 125 as core 17 on socket 1 00:05:37.131 EAL: Detected lcore 126 as core 18 on socket 1 00:05:37.131 EAL: Detected lcore 127 as core 19 on socket 1 00:05:37.131 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:37.131 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:37.131 EAL: Skipped lcore 130 as core 22 on socket 1 
00:05:37.131 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:37.131 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:37.131 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:37.131 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:37.131 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:37.131 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:37.131 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:37.131 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:37.131 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:37.131 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:37.131 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:37.131 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:37.131 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:37.131 EAL: Maximum logical cores by configuration: 128 00:05:37.131 EAL: Detected CPU lcores: 128 00:05:37.131 EAL: Detected NUMA nodes: 2 00:05:37.131 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:37.131 EAL: Detected shared linkage of DPDK 00:05:37.131 EAL: No shared files mode enabled, IPC will be disabled 00:05:37.131 EAL: Bus pci wants IOVA as 'DC' 00:05:37.131 EAL: Buses did not request a specific IOVA mode. 00:05:37.131 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:37.131 EAL: Selected IOVA mode 'VA' 00:05:37.131 EAL: Probing VFIO support... 00:05:37.131 EAL: IOMMU type 1 (Type 1) is supported 00:05:37.131 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:37.131 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:37.131 EAL: VFIO support initialized 00:05:37.131 EAL: Ask a virtual area of 0x2e000 bytes 00:05:37.131 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:37.131 EAL: Setting up physically contiguous memory... 00:05:37.131 EAL: Setting maximum number of open files to 524288 00:05:37.131 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:37.131 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:37.131 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:37.131 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.131 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:37.131 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.131 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.131 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:37.131 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:37.131 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.131 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:37.131 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.131 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.131 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:37.131 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:37.131 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.131 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:37.131 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.131 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.131 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:37.131 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:37.131 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.131 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:37.131 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.131 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.131 EAL: 
Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:37.131 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:37.131 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:37.131 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.131 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:37.131 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.131 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.131 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:37.131 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:37.131 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.131 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:37.131 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.131 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.131 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:37.131 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:37.131 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.131 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:37.131 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.131 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.131 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:37.131 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:37.131 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.131 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:37.131 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.131 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.131 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:37.131 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:37.131 EAL: Hugepages will be freed exactly as allocated. 00:05:37.131 EAL: No shared files mode enabled, IPC is disabled 00:05:37.131 EAL: No shared files mode enabled, IPC is disabled 00:05:37.131 EAL: TSC frequency is ~2400000 KHz 00:05:37.131 EAL: Main lcore 0 is ready (tid=7f2a2f2d7a00;cpuset=[0]) 00:05:37.131 EAL: Trying to obtain current memory policy. 00:05:37.131 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.131 EAL: Restoring previous memory policy: 0 00:05:37.131 EAL: request: mp_malloc_sync 00:05:37.131 EAL: No shared files mode enabled, IPC is disabled 00:05:37.131 EAL: Heap on socket 0 was expanded by 2MB 00:05:37.131 EAL: No shared files mode enabled, IPC is disabled 00:05:37.131 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:37.131 EAL: Mem event callback 'spdk:(nil)' registered 00:05:37.131 00:05:37.131 00:05:37.131 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.131 http://cunit.sourceforge.net/ 00:05:37.131 00:05:37.131 00:05:37.131 Suite: components_suite 00:05:37.131 Test: vtophys_malloc_test ...passed 00:05:37.131 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
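
[Editor's note] In the vtophys_spdk_malloc_test cycles that follow, each allocation surfaces as a 'Heap on socket 0 was expanded by N' mem-event callback and each free as a matching 'shrunk by N', so a clean run leaves the heap where it started. A quick way to sanity-check that symmetry when reading a console log like this one (the file name is illustrative):

    awk '/Heap on socket 0 was expanded by/ { up++ }
         /Heap on socket 0 was shrunk by/   { dn++ }
         END { printf "expanded=%d shrunk=%d %s\n", up, dn,
               (up == dn ? "balanced" : "UNBALANCED") }' console.log
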
00:05:37.131 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.131 EAL: Restoring previous memory policy: 4 00:05:37.131 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.131 EAL: request: mp_malloc_sync 00:05:37.131 EAL: No shared files mode enabled, IPC is disabled 00:05:37.131 EAL: Heap on socket 0 was expanded by 4MB 00:05:37.131 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.131 EAL: request: mp_malloc_sync 00:05:37.131 EAL: No shared files mode enabled, IPC is disabled 00:05:37.131 EAL: Heap on socket 0 was shrunk by 4MB 00:05:37.131 EAL: Trying to obtain current memory policy. 00:05:37.131 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.131 EAL: Restoring previous memory policy: 4 00:05:37.131 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.131 EAL: request: mp_malloc_sync 00:05:37.131 EAL: No shared files mode enabled, IPC is disabled 00:05:37.131 EAL: Heap on socket 0 was expanded by 6MB 00:05:37.131 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.131 EAL: request: mp_malloc_sync 00:05:37.131 EAL: No shared files mode enabled, IPC is disabled 00:05:37.131 EAL: Heap on socket 0 was shrunk by 6MB 00:05:37.131 EAL: Trying to obtain current memory policy. 00:05:37.131 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.131 EAL: Restoring previous memory policy: 4 00:05:37.131 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.131 EAL: request: mp_malloc_sync 00:05:37.131 EAL: No shared files mode enabled, IPC is disabled 00:05:37.131 EAL: Heap on socket 0 was expanded by 10MB 00:05:37.131 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.131 EAL: request: mp_malloc_sync 00:05:37.131 EAL: No shared files mode enabled, IPC is disabled 00:05:37.131 EAL: Heap on socket 0 was shrunk by 10MB 00:05:37.131 EAL: Trying to obtain current memory policy. 00:05:37.131 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.131 EAL: Restoring previous memory policy: 4 00:05:37.131 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.132 EAL: request: mp_malloc_sync 00:05:37.132 EAL: No shared files mode enabled, IPC is disabled 00:05:37.132 EAL: Heap on socket 0 was expanded by 18MB 00:05:37.132 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.132 EAL: request: mp_malloc_sync 00:05:37.132 EAL: No shared files mode enabled, IPC is disabled 00:05:37.132 EAL: Heap on socket 0 was shrunk by 18MB 00:05:37.132 EAL: Trying to obtain current memory policy. 00:05:37.132 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.132 EAL: Restoring previous memory policy: 4 00:05:37.132 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.132 EAL: request: mp_malloc_sync 00:05:37.132 EAL: No shared files mode enabled, IPC is disabled 00:05:37.132 EAL: Heap on socket 0 was expanded by 34MB 00:05:37.132 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.132 EAL: request: mp_malloc_sync 00:05:37.132 EAL: No shared files mode enabled, IPC is disabled 00:05:37.132 EAL: Heap on socket 0 was shrunk by 34MB 00:05:37.132 EAL: Trying to obtain current memory policy. 
00:05:37.132 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.132 EAL: Restoring previous memory policy: 4 00:05:37.132 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.132 EAL: request: mp_malloc_sync 00:05:37.132 EAL: No shared files mode enabled, IPC is disabled 00:05:37.132 EAL: Heap on socket 0 was expanded by 66MB 00:05:37.132 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.132 EAL: request: mp_malloc_sync 00:05:37.132 EAL: No shared files mode enabled, IPC is disabled 00:05:37.132 EAL: Heap on socket 0 was shrunk by 66MB 00:05:37.132 EAL: Trying to obtain current memory policy. 00:05:37.132 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.392 EAL: Restoring previous memory policy: 4 00:05:37.393 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.393 EAL: request: mp_malloc_sync 00:05:37.393 EAL: No shared files mode enabled, IPC is disabled 00:05:37.393 EAL: Heap on socket 0 was expanded by 130MB 00:05:37.393 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.393 EAL: request: mp_malloc_sync 00:05:37.393 EAL: No shared files mode enabled, IPC is disabled 00:05:37.393 EAL: Heap on socket 0 was shrunk by 130MB 00:05:37.393 EAL: Trying to obtain current memory policy. 00:05:37.393 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.393 EAL: Restoring previous memory policy: 4 00:05:37.393 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.393 EAL: request: mp_malloc_sync 00:05:37.393 EAL: No shared files mode enabled, IPC is disabled 00:05:37.393 EAL: Heap on socket 0 was expanded by 258MB 00:05:37.393 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.393 EAL: request: mp_malloc_sync 00:05:37.393 EAL: No shared files mode enabled, IPC is disabled 00:05:37.393 EAL: Heap on socket 0 was shrunk by 258MB 00:05:37.393 EAL: Trying to obtain current memory policy. 00:05:37.393 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.393 EAL: Restoring previous memory policy: 4 00:05:37.393 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.393 EAL: request: mp_malloc_sync 00:05:37.393 EAL: No shared files mode enabled, IPC is disabled 00:05:37.393 EAL: Heap on socket 0 was expanded by 514MB 00:05:37.393 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.654 EAL: request: mp_malloc_sync 00:05:37.654 EAL: No shared files mode enabled, IPC is disabled 00:05:37.654 EAL: Heap on socket 0 was shrunk by 514MB 00:05:37.654 EAL: Trying to obtain current memory policy. 
00:05:37.654 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.654 EAL: Restoring previous memory policy: 4 00:05:37.654 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.654 EAL: request: mp_malloc_sync 00:05:37.654 EAL: No shared files mode enabled, IPC is disabled 00:05:37.654 EAL: Heap on socket 0 was expanded by 1026MB 00:05:37.927 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.927 EAL: request: mp_malloc_sync 00:05:37.927 EAL: No shared files mode enabled, IPC is disabled 00:05:37.927 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:37.927 passed 00:05:37.927 00:05:37.927 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.927 suites 1 1 n/a 0 0 00:05:37.927 tests 2 2 2 0 0 00:05:37.927 asserts 497 497 497 0 n/a 00:05:37.927 00:05:37.927 Elapsed time = 0.685 seconds 00:05:37.927 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.927 EAL: request: mp_malloc_sync 00:05:37.927 EAL: No shared files mode enabled, IPC is disabled 00:05:37.927 EAL: Heap on socket 0 was shrunk by 2MB 00:05:37.927 EAL: No shared files mode enabled, IPC is disabled 00:05:37.927 EAL: No shared files mode enabled, IPC is disabled 00:05:37.927 EAL: No shared files mode enabled, IPC is disabled 00:05:37.927 00:05:37.927 real 0m0.826s 00:05:37.927 user 0m0.435s 00:05:37.927 sys 0m0.363s 00:05:37.927 09:26:37 env.env_vtophys -- common/autotest_common.sh@1129 -- # xtrace_disable 00:05:37.927 09:26:37 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:37.927 ************************************ 00:05:37.927 END TEST env_vtophys 00:05:37.927 ************************************ 00:05:37.927 09:26:37 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:37.927 09:26:37 env -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:05:37.927 09:26:37 env -- common/autotest_common.sh@1110 -- # xtrace_disable 00:05:37.927 09:26:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.927 ************************************ 00:05:37.927 START TEST env_pci 00:05:37.927 ************************************ 00:05:37.927 09:26:37 env.env_pci -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:37.927 00:05:37.927 00:05:37.927 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.927 http://cunit.sourceforge.net/ 00:05:37.927 00:05:37.927 00:05:37.927 Suite: pci 00:05:37.927 Test: pci_hook ...[2024-10-07 09:26:37.544741] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3116864 has claimed it 00:05:37.927 EAL: Cannot find device (10000:00:01.0) 00:05:37.927 EAL: Failed to attach device on primary process 00:05:37.927 passed 00:05:37.927 00:05:37.927 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.927 suites 1 1 n/a 0 0 00:05:37.927 tests 1 1 1 0 0 00:05:37.927 asserts 25 25 25 0 n/a 00:05:37.927 00:05:37.927 Elapsed time = 0.032 seconds 00:05:37.927 00:05:37.927 real 0m0.053s 00:05:37.927 user 0m0.011s 00:05:37.927 sys 0m0.042s 00:05:37.928 09:26:37 env.env_pci -- common/autotest_common.sh@1129 -- # xtrace_disable 00:05:37.928 09:26:37 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:37.928 ************************************ 00:05:37.928 END TEST env_pci 00:05:37.928 ************************************ 00:05:38.306 09:26:37 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:38.306 
09:26:37 env -- env/env.sh@15 -- # uname 00:05:38.306 09:26:37 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:38.306 09:26:37 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:38.306 09:26:37 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.306 09:26:37 env -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:05:38.306 09:26:37 env -- common/autotest_common.sh@1110 -- # xtrace_disable 00:05:38.306 09:26:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.306 ************************************ 00:05:38.306 START TEST env_dpdk_post_init 00:05:38.306 ************************************ 00:05:38.306 09:26:37 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.306 EAL: Detected CPU lcores: 128 00:05:38.306 EAL: Detected NUMA nodes: 2 00:05:38.306 EAL: Detected shared linkage of DPDK 00:05:38.306 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:38.306 EAL: Selected IOVA mode 'VA' 00:05:38.306 EAL: VFIO support initialized 00:05:38.306 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:38.306 EAL: Using IOMMU type 1 (Type 1) 00:05:38.590 EAL: Ignore mapping IO port bar(1) 00:05:38.590 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:38.590 EAL: Ignore mapping IO port bar(1) 00:05:38.874 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:38.874 EAL: Ignore mapping IO port bar(1) 00:05:38.874 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:39.178 EAL: Ignore mapping IO port bar(1) 00:05:39.178 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:39.178 EAL: Ignore mapping IO port bar(1) 00:05:39.471 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:39.471 EAL: Ignore mapping IO port bar(1) 00:05:39.733 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:39.733 EAL: Ignore mapping IO port bar(1) 00:05:39.733 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:39.994 EAL: Ignore mapping IO port bar(1) 00:05:39.994 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:40.255 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:40.516 EAL: Ignore mapping IO port bar(1) 00:05:40.516 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:40.516 EAL: Ignore mapping IO port bar(1) 00:05:40.777 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:40.777 EAL: Ignore mapping IO port bar(1) 00:05:41.039 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:41.039 EAL: Ignore mapping IO port bar(1) 00:05:41.301 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:41.301 EAL: Ignore mapping IO port bar(1) 00:05:41.301 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:41.562 EAL: Ignore mapping IO port bar(1) 00:05:41.562 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:41.825 EAL: Ignore mapping IO port bar(1) 00:05:41.825 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 
(socket 1) 00:05:42.086 EAL: Ignore mapping IO port bar(1) 00:05:42.086 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:42.086 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:42.086 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:42.086 Starting DPDK initialization... 00:05:42.086 Starting SPDK post initialization... 00:05:42.086 SPDK NVMe probe 00:05:42.086 Attaching to 0000:65:00.0 00:05:42.086 Attached to 0000:65:00.0 00:05:42.086 Cleaning up... 00:05:44.005 00:05:44.005 real 0m5.743s 00:05:44.005 user 0m0.098s 00:05:44.005 sys 0m0.200s 00:05:44.005 09:26:43 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # xtrace_disable 00:05:44.005 09:26:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:44.005 ************************************ 00:05:44.005 END TEST env_dpdk_post_init 00:05:44.005 ************************************ 00:05:44.005 09:26:43 env -- env/env.sh@26 -- # uname 00:05:44.005 09:26:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:44.005 09:26:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.005 09:26:43 env -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:05:44.005 09:26:43 env -- common/autotest_common.sh@1110 -- # xtrace_disable 00:05:44.005 09:26:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.005 ************************************ 00:05:44.005 START TEST env_mem_callbacks 00:05:44.005 ************************************ 00:05:44.005 09:26:43 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.005 EAL: Detected CPU lcores: 128 00:05:44.005 EAL: Detected NUMA nodes: 2 00:05:44.005 EAL: Detected shared linkage of DPDK 00:05:44.005 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:44.005 EAL: Selected IOVA mode 'VA' 00:05:44.005 EAL: VFIO support initialized 00:05:44.005 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:44.005 00:05:44.005 00:05:44.005 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.005 http://cunit.sourceforge.net/ 00:05:44.005 00:05:44.005 00:05:44.005 Suite: memory 00:05:44.005 Test: test ... 
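
[Editor's note] Every START TEST/END TEST banner in this section (env_memory, env_vtophys, env_pci, env_dpdk_post_init, and env_mem_callbacks just above) comes from the run_test wrapper in autotest_common.sh; the `'[' 2 -le 1 ']'` check that opens each of those trace blocks is its argument-count guard, and the `real/user/sys` lines after each suite are its timing. A condensed sketch of the wrapper's shape — the real body also toggles xtrace and integrates with timing_enter/timing_exit:

    run_test() {
        # need a test name plus at least one command word
        if [ "$#" -le 1 ]; then
            echo "usage: run_test <name> <command> [args...]" >&2
            return 1
        fi
        local test_name=$1 rc
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"            # run the suite; its exit status is what counts
        rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

    # as in the trace: run_test env_mem_callbacks "$rootdir/test/env/mem_callbacks/mem_callbacks"
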
00:05:44.005 register 0x200000200000 2097152 00:05:44.005 malloc 3145728 00:05:44.005 register 0x200000400000 4194304 00:05:44.005 buf 0x200000500000 len 3145728 PASSED 00:05:44.005 malloc 64 00:05:44.005 buf 0x2000004fff40 len 64 PASSED 00:05:44.005 malloc 4194304 00:05:44.005 register 0x200000800000 6291456 00:05:44.005 buf 0x200000a00000 len 4194304 PASSED 00:05:44.005 free 0x200000500000 3145728 00:05:44.005 free 0x2000004fff40 64 00:05:44.005 unregister 0x200000400000 4194304 PASSED 00:05:44.005 free 0x200000a00000 4194304 00:05:44.005 unregister 0x200000800000 6291456 PASSED 00:05:44.005 malloc 8388608 00:05:44.005 register 0x200000400000 10485760 00:05:44.005 buf 0x200000600000 len 8388608 PASSED 00:05:44.005 free 0x200000600000 8388608 00:05:44.005 unregister 0x200000400000 10485760 PASSED 00:05:44.005 passed 00:05:44.005 00:05:44.005 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.005 suites 1 1 n/a 0 0 00:05:44.005 tests 1 1 1 0 0 00:05:44.005 asserts 15 15 15 0 n/a 00:05:44.005 00:05:44.005 Elapsed time = 0.010 seconds 00:05:44.005 00:05:44.005 real 0m0.072s 00:05:44.005 user 0m0.029s 00:05:44.005 sys 0m0.042s 00:05:44.005 09:26:43 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # xtrace_disable 00:05:44.005 09:26:43 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:44.005 ************************************ 00:05:44.005 END TEST env_mem_callbacks 00:05:44.005 ************************************ 00:05:44.005 00:05:44.005 real 0m7.553s 00:05:44.005 user 0m1.044s 00:05:44.005 sys 0m1.074s 00:05:44.005 09:26:43 env -- common/autotest_common.sh@1129 -- # xtrace_disable 00:05:44.005 09:26:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.005 ************************************ 00:05:44.005 END TEST env 00:05:44.005 ************************************ 00:05:44.005 09:26:43 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:44.005 09:26:43 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:05:44.005 09:26:43 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:05:44.006 09:26:43 -- common/autotest_common.sh@10 -- # set +x 00:05:44.267 ************************************ 00:05:44.267 START TEST rpc 00:05:44.267 ************************************ 00:05:44.267 09:26:43 rpc -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:44.267 * Looking for test storage... 
00:05:44.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:44.267 09:26:43 rpc -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:05:44.267 09:26:43 rpc -- common/autotest_common.sh@1626 -- # lcov --version 00:05:44.267 09:26:43 rpc -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:05:44.267 09:26:43 rpc -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:05:44.267 09:26:43 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.267 09:26:43 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.267 09:26:43 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.267 09:26:43 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.267 09:26:43 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.267 09:26:43 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.267 09:26:43 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.267 09:26:43 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.267 09:26:43 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.267 09:26:43 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.267 09:26:43 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.267 09:26:43 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:44.267 09:26:43 rpc -- scripts/common.sh@345 -- # : 1 00:05:44.267 09:26:43 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.267 09:26:43 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.267 09:26:43 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:44.267 09:26:43 rpc -- scripts/common.sh@353 -- # local d=1 00:05:44.267 09:26:43 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.267 09:26:43 rpc -- scripts/common.sh@355 -- # echo 1 00:05:44.267 09:26:43 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.267 09:26:43 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:44.267 09:26:43 rpc -- scripts/common.sh@353 -- # local d=2 00:05:44.267 09:26:43 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.267 09:26:43 rpc -- scripts/common.sh@355 -- # echo 2 00:05:44.528 09:26:43 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.528 09:26:43 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.528 09:26:43 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.528 09:26:43 rpc -- scripts/common.sh@368 -- # return 0 00:05:44.528 09:26:43 rpc -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.528 09:26:43 rpc -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:05:44.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.528 --rc genhtml_branch_coverage=1 00:05:44.528 --rc genhtml_function_coverage=1 00:05:44.528 --rc genhtml_legend=1 00:05:44.528 --rc geninfo_all_blocks=1 00:05:44.528 --rc geninfo_unexecuted_blocks=1 00:05:44.528 00:05:44.528 ' 00:05:44.528 09:26:43 rpc -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:05:44.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.528 --rc genhtml_branch_coverage=1 00:05:44.528 --rc genhtml_function_coverage=1 00:05:44.528 --rc genhtml_legend=1 00:05:44.528 --rc geninfo_all_blocks=1 00:05:44.528 --rc geninfo_unexecuted_blocks=1 00:05:44.528 00:05:44.528 ' 00:05:44.528 09:26:43 rpc -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:05:44.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.528 --rc genhtml_branch_coverage=1 00:05:44.528 --rc genhtml_function_coverage=1 
00:05:44.528 --rc genhtml_legend=1 00:05:44.528 --rc geninfo_all_blocks=1 00:05:44.528 --rc geninfo_unexecuted_blocks=1 00:05:44.528 00:05:44.528 ' 00:05:44.528 09:26:43 rpc -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:05:44.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.528 --rc genhtml_branch_coverage=1 00:05:44.528 --rc genhtml_function_coverage=1 00:05:44.528 --rc genhtml_legend=1 00:05:44.528 --rc geninfo_all_blocks=1 00:05:44.528 --rc geninfo_unexecuted_blocks=1 00:05:44.528 00:05:44.528 ' 00:05:44.528 09:26:43 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3118229 00:05:44.528 09:26:43 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.528 09:26:43 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3118229 00:05:44.528 09:26:43 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:44.528 09:26:43 rpc -- common/autotest_common.sh@834 -- # '[' -z 3118229 ']' 00:05:44.528 09:26:43 rpc -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.528 09:26:43 rpc -- common/autotest_common.sh@839 -- # local max_retries=100 00:05:44.528 09:26:43 rpc -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.528 09:26:43 rpc -- common/autotest_common.sh@843 -- # xtrace_disable 00:05:44.528 09:26:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.528 [2024-10-07 09:26:43.992305] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:05:44.528 [2024-10-07 09:26:43.992376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3118229 ] 00:05:44.528 [2024-10-07 09:26:44.076956] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.528 [2024-10-07 09:26:44.173015] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:44.528 [2024-10-07 09:26:44.173078] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3118229' to capture a snapshot of events at runtime. 00:05:44.528 [2024-10-07 09:26:44.173090] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:44.528 [2024-10-07 09:26:44.173099] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:44.528 [2024-10-07 09:26:44.173107] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3118229 for offline analysis/debug. 
00:05:44.528 [2024-10-07 09:26:44.173890] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.471 09:26:44 rpc -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:05:45.471 09:26:44 rpc -- common/autotest_common.sh@867 -- # return 0 00:05:45.471 09:26:44 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:45.471 09:26:44 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:45.471 09:26:44 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:45.471 09:26:44 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:45.471 09:26:44 rpc -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:05:45.471 09:26:44 rpc -- common/autotest_common.sh@1110 -- # xtrace_disable 00:05:45.471 09:26:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.471 ************************************ 00:05:45.471 START TEST rpc_integrity 00:05:45.471 ************************************ 00:05:45.471 09:26:44 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # rpc_integrity 00:05:45.471 09:26:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.471 09:26:44 rpc.rpc_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:45.471 09:26:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.471 09:26:44 rpc.rpc_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:45.471 09:26:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.471 09:26:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:45.471 09:26:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.471 09:26:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.471 09:26:44 rpc.rpc_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:45.471 09:26:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.471 09:26:44 rpc.rpc_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:45.471 09:26:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:45.471 09:26:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.471 09:26:44 rpc.rpc_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:45.471 09:26:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.471 09:26:44 rpc.rpc_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:45.471 09:26:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.471 { 00:05:45.471 "name": "Malloc0", 00:05:45.471 "aliases": [ 00:05:45.471 "bbc1bf32-66dc-4f0c-9a66-82433c9c997b" 00:05:45.471 ], 00:05:45.471 "product_name": "Malloc disk", 00:05:45.471 "block_size": 512, 00:05:45.471 "num_blocks": 16384, 00:05:45.471 "uuid": "bbc1bf32-66dc-4f0c-9a66-82433c9c997b", 00:05:45.471 "assigned_rate_limits": { 00:05:45.471 "rw_ios_per_sec": 0, 00:05:45.471 "rw_mbytes_per_sec": 0, 00:05:45.471 "r_mbytes_per_sec": 0, 00:05:45.471 "w_mbytes_per_sec": 0 00:05:45.471 }, 
00:05:45.471 "claimed": false, 00:05:45.471 "zoned": false, 00:05:45.471 "supported_io_types": { 00:05:45.471 "read": true, 00:05:45.471 "write": true, 00:05:45.471 "unmap": true, 00:05:45.471 "flush": true, 00:05:45.471 "reset": true, 00:05:45.471 "nvme_admin": false, 00:05:45.471 "nvme_io": false, 00:05:45.471 "nvme_io_md": false, 00:05:45.471 "write_zeroes": true, 00:05:45.471 "zcopy": true, 00:05:45.471 "get_zone_info": false, 00:05:45.471 "zone_management": false, 00:05:45.471 "zone_append": false, 00:05:45.471 "compare": false, 00:05:45.471 "compare_and_write": false, 00:05:45.471 "abort": true, 00:05:45.471 "seek_hole": false, 00:05:45.471 "seek_data": false, 00:05:45.471 "copy": true, 00:05:45.471 "nvme_iov_md": false 00:05:45.471 }, 00:05:45.471 "memory_domains": [ 00:05:45.471 { 00:05:45.471 "dma_device_id": "system", 00:05:45.471 "dma_device_type": 1 00:05:45.471 }, 00:05:45.471 { 00:05:45.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.471 "dma_device_type": 2 00:05:45.471 } 00:05:45.471 ], 00:05:45.471 "driver_specific": {} 00:05:45.471 } 00:05:45.471 ]' 00:05:45.471 09:26:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:45.471 09:26:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.471 09:26:44 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:45.471 09:26:44 rpc.rpc_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:45.471 09:26:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.471 [2024-10-07 09:26:44.997289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:45.471 [2024-10-07 09:26:44.997336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.471 [2024-10-07 09:26:44.997351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a69e00 00:05:45.471 [2024-10-07 09:26:44.997359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.471 [2024-10-07 09:26:44.998959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.471 [2024-10-07 09:26:44.998996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.471 Passthru0 00:05:45.471 09:26:45 rpc.rpc_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:45.471 09:26:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.471 09:26:45 rpc.rpc_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:45.471 09:26:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.471 09:26:45 rpc.rpc_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:45.471 09:26:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.471 { 00:05:45.471 "name": "Malloc0", 00:05:45.471 "aliases": [ 00:05:45.471 "bbc1bf32-66dc-4f0c-9a66-82433c9c997b" 00:05:45.471 ], 00:05:45.471 "product_name": "Malloc disk", 00:05:45.471 "block_size": 512, 00:05:45.471 "num_blocks": 16384, 00:05:45.471 "uuid": "bbc1bf32-66dc-4f0c-9a66-82433c9c997b", 00:05:45.471 "assigned_rate_limits": { 00:05:45.471 "rw_ios_per_sec": 0, 00:05:45.471 "rw_mbytes_per_sec": 0, 00:05:45.471 "r_mbytes_per_sec": 0, 00:05:45.471 "w_mbytes_per_sec": 0 00:05:45.471 }, 00:05:45.471 "claimed": true, 00:05:45.471 "claim_type": "exclusive_write", 00:05:45.471 "zoned": false, 00:05:45.471 "supported_io_types": { 00:05:45.471 "read": true, 00:05:45.471 "write": true, 00:05:45.471 "unmap": true, 00:05:45.471 "flush": 
true, 00:05:45.471 "reset": true, 00:05:45.471 "nvme_admin": false, 00:05:45.471 "nvme_io": false, 00:05:45.471 "nvme_io_md": false, 00:05:45.471 "write_zeroes": true, 00:05:45.471 "zcopy": true, 00:05:45.471 "get_zone_info": false, 00:05:45.471 "zone_management": false, 00:05:45.471 "zone_append": false, 00:05:45.471 "compare": false, 00:05:45.471 "compare_and_write": false, 00:05:45.471 "abort": true, 00:05:45.471 "seek_hole": false, 00:05:45.471 "seek_data": false, 00:05:45.471 "copy": true, 00:05:45.471 "nvme_iov_md": false 00:05:45.471 }, 00:05:45.471 "memory_domains": [ 00:05:45.471 { 00:05:45.471 "dma_device_id": "system", 00:05:45.471 "dma_device_type": 1 00:05:45.471 }, 00:05:45.471 { 00:05:45.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.471 "dma_device_type": 2 00:05:45.471 } 00:05:45.471 ], 00:05:45.471 "driver_specific": {} 00:05:45.471 }, 00:05:45.471 { 00:05:45.471 "name": "Passthru0", 00:05:45.471 "aliases": [ 00:05:45.471 "8eb35e0f-aabf-599d-bc15-bd7273def371" 00:05:45.471 ], 00:05:45.471 "product_name": "passthru", 00:05:45.471 "block_size": 512, 00:05:45.471 "num_blocks": 16384, 00:05:45.471 "uuid": "8eb35e0f-aabf-599d-bc15-bd7273def371", 00:05:45.471 "assigned_rate_limits": { 00:05:45.471 "rw_ios_per_sec": 0, 00:05:45.471 "rw_mbytes_per_sec": 0, 00:05:45.471 "r_mbytes_per_sec": 0, 00:05:45.471 "w_mbytes_per_sec": 0 00:05:45.471 }, 00:05:45.471 "claimed": false, 00:05:45.471 "zoned": false, 00:05:45.471 "supported_io_types": { 00:05:45.471 "read": true, 00:05:45.471 "write": true, 00:05:45.471 "unmap": true, 00:05:45.471 "flush": true, 00:05:45.471 "reset": true, 00:05:45.471 "nvme_admin": false, 00:05:45.471 "nvme_io": false, 00:05:45.471 "nvme_io_md": false, 00:05:45.471 "write_zeroes": true, 00:05:45.471 "zcopy": true, 00:05:45.471 "get_zone_info": false, 00:05:45.471 "zone_management": false, 00:05:45.471 "zone_append": false, 00:05:45.471 "compare": false, 00:05:45.471 "compare_and_write": false, 00:05:45.471 "abort": true, 00:05:45.471 "seek_hole": false, 00:05:45.471 "seek_data": false, 00:05:45.471 "copy": true, 00:05:45.471 "nvme_iov_md": false 00:05:45.471 }, 00:05:45.471 "memory_domains": [ 00:05:45.471 { 00:05:45.471 "dma_device_id": "system", 00:05:45.471 "dma_device_type": 1 00:05:45.471 }, 00:05:45.471 { 00:05:45.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.471 "dma_device_type": 2 00:05:45.471 } 00:05:45.471 ], 00:05:45.471 "driver_specific": { 00:05:45.471 "passthru": { 00:05:45.471 "name": "Passthru0", 00:05:45.471 "base_bdev_name": "Malloc0" 00:05:45.471 } 00:05:45.471 } 00:05:45.471 } 00:05:45.471 ]' 00:05:45.471 09:26:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:45.471 09:26:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.471 09:26:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.471 09:26:45 rpc.rpc_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:45.471 09:26:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.471 09:26:45 rpc.rpc_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:45.471 09:26:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:45.471 09:26:45 rpc.rpc_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:45.471 09:26:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.471 09:26:45 rpc.rpc_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:45.471 09:26:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:45.471 09:26:45 rpc.rpc_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:45.472 09:26:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.472 09:26:45 rpc.rpc_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:45.472 09:26:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.472 09:26:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:45.733 09:26:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:45.733 00:05:45.733 real 0m0.298s 00:05:45.733 user 0m0.191s 00:05:45.733 sys 0m0.038s 00:05:45.733 09:26:45 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # xtrace_disable 00:05:45.733 09:26:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.733 ************************************ 00:05:45.733 END TEST rpc_integrity 00:05:45.733 ************************************ 00:05:45.733 09:26:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:45.733 09:26:45 rpc -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:05:45.733 09:26:45 rpc -- common/autotest_common.sh@1110 -- # xtrace_disable 00:05:45.733 09:26:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.733 ************************************ 00:05:45.733 START TEST rpc_plugins 00:05:45.733 ************************************ 00:05:45.733 09:26:45 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # rpc_plugins 00:05:45.733 09:26:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:45.733 09:26:45 rpc.rpc_plugins -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:45.733 09:26:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.733 09:26:45 rpc.rpc_plugins -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:45.733 09:26:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:45.733 09:26:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:45.733 09:26:45 rpc.rpc_plugins -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:45.733 09:26:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.733 09:26:45 rpc.rpc_plugins -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:45.733 09:26:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:45.733 { 00:05:45.733 "name": "Malloc1", 00:05:45.733 "aliases": [ 00:05:45.733 "d4583973-6ec4-4a1a-a23e-d4c667055222" 00:05:45.733 ], 00:05:45.733 "product_name": "Malloc disk", 00:05:45.733 "block_size": 4096, 00:05:45.733 "num_blocks": 256, 00:05:45.733 "uuid": "d4583973-6ec4-4a1a-a23e-d4c667055222", 00:05:45.733 "assigned_rate_limits": { 00:05:45.733 "rw_ios_per_sec": 0, 00:05:45.733 "rw_mbytes_per_sec": 0, 00:05:45.733 "r_mbytes_per_sec": 0, 00:05:45.733 "w_mbytes_per_sec": 0 00:05:45.733 }, 00:05:45.733 "claimed": false, 00:05:45.733 "zoned": false, 00:05:45.733 "supported_io_types": { 00:05:45.733 "read": true, 00:05:45.733 "write": true, 00:05:45.733 "unmap": true, 00:05:45.733 "flush": true, 00:05:45.733 "reset": true, 00:05:45.733 "nvme_admin": false, 00:05:45.733 "nvme_io": false, 00:05:45.733 "nvme_io_md": false, 00:05:45.733 "write_zeroes": true, 00:05:45.733 "zcopy": true, 00:05:45.733 "get_zone_info": false, 00:05:45.733 "zone_management": false, 00:05:45.733 "zone_append": false, 00:05:45.733 "compare": false, 00:05:45.733 "compare_and_write": false, 00:05:45.733 "abort": true, 00:05:45.733 "seek_hole": false, 00:05:45.733 "seek_data": false, 00:05:45.733 "copy": true, 00:05:45.733 "nvme_iov_md": false 
00:05:45.733 }, 00:05:45.733 "memory_domains": [ 00:05:45.733 { 00:05:45.733 "dma_device_id": "system", 00:05:45.733 "dma_device_type": 1 00:05:45.733 }, 00:05:45.733 { 00:05:45.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.733 "dma_device_type": 2 00:05:45.733 } 00:05:45.734 ], 00:05:45.734 "driver_specific": {} 00:05:45.734 } 00:05:45.734 ]' 00:05:45.734 09:26:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:45.734 09:26:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:45.734 09:26:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:45.734 09:26:45 rpc.rpc_plugins -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:45.734 09:26:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.734 09:26:45 rpc.rpc_plugins -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:45.734 09:26:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:45.734 09:26:45 rpc.rpc_plugins -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:45.734 09:26:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.734 09:26:45 rpc.rpc_plugins -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:45.734 09:26:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:45.734 09:26:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:45.734 09:26:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:45.734 00:05:45.734 real 0m0.150s 00:05:45.734 user 0m0.091s 00:05:45.734 sys 0m0.024s 00:05:45.734 09:26:45 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # xtrace_disable 00:05:45.734 09:26:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.734 ************************************ 00:05:45.734 END TEST rpc_plugins 00:05:45.734 ************************************ 00:05:45.996 09:26:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:45.996 09:26:45 rpc -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:05:45.996 09:26:45 rpc -- common/autotest_common.sh@1110 -- # xtrace_disable 00:05:45.996 09:26:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.996 ************************************ 00:05:45.996 START TEST rpc_trace_cmd_test 00:05:45.996 ************************************ 00:05:45.996 09:26:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # rpc_trace_cmd_test 00:05:45.996 09:26:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:45.996 09:26:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:45.996 09:26:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:45.996 09:26:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:45.996 09:26:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:45.996 09:26:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:45.996 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3118229", 00:05:45.996 "tpoint_group_mask": "0x8", 00:05:45.996 "iscsi_conn": { 00:05:45.996 "mask": "0x2", 00:05:45.996 "tpoint_mask": "0x0" 00:05:45.996 }, 00:05:45.996 "scsi": { 00:05:45.996 "mask": "0x4", 00:05:45.996 "tpoint_mask": "0x0" 00:05:45.996 }, 00:05:45.996 "bdev": { 00:05:45.996 "mask": "0x8", 00:05:45.996 "tpoint_mask": "0xffffffffffffffff" 00:05:45.996 }, 00:05:45.996 "nvmf_rdma": { 00:05:45.996 "mask": "0x10", 00:05:45.996 "tpoint_mask": "0x0" 00:05:45.996 }, 00:05:45.996 "nvmf_tcp": { 00:05:45.996 "mask": "0x20", 00:05:45.996 
"tpoint_mask": "0x0" 00:05:45.996 }, 00:05:45.996 "ftl": { 00:05:45.996 "mask": "0x40", 00:05:45.996 "tpoint_mask": "0x0" 00:05:45.996 }, 00:05:45.996 "blobfs": { 00:05:45.996 "mask": "0x80", 00:05:45.996 "tpoint_mask": "0x0" 00:05:45.996 }, 00:05:45.996 "dsa": { 00:05:45.996 "mask": "0x200", 00:05:45.996 "tpoint_mask": "0x0" 00:05:45.996 }, 00:05:45.996 "thread": { 00:05:45.996 "mask": "0x400", 00:05:45.996 "tpoint_mask": "0x0" 00:05:45.996 }, 00:05:45.996 "nvme_pcie": { 00:05:45.996 "mask": "0x800", 00:05:45.996 "tpoint_mask": "0x0" 00:05:45.996 }, 00:05:45.996 "iaa": { 00:05:45.996 "mask": "0x1000", 00:05:45.996 "tpoint_mask": "0x0" 00:05:45.996 }, 00:05:45.996 "nvme_tcp": { 00:05:45.996 "mask": "0x2000", 00:05:45.996 "tpoint_mask": "0x0" 00:05:45.996 }, 00:05:45.996 "bdev_nvme": { 00:05:45.996 "mask": "0x4000", 00:05:45.996 "tpoint_mask": "0x0" 00:05:45.996 }, 00:05:45.996 "sock": { 00:05:45.996 "mask": "0x8000", 00:05:45.996 "tpoint_mask": "0x0" 00:05:45.996 }, 00:05:45.996 "blob": { 00:05:45.996 "mask": "0x10000", 00:05:45.996 "tpoint_mask": "0x0" 00:05:45.996 }, 00:05:45.996 "bdev_raid": { 00:05:45.996 "mask": "0x20000", 00:05:45.996 "tpoint_mask": "0x0" 00:05:45.996 }, 00:05:45.996 "scheduler": { 00:05:45.996 "mask": "0x40000", 00:05:45.996 "tpoint_mask": "0x0" 00:05:45.996 } 00:05:45.996 }' 00:05:45.996 09:26:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:45.996 09:26:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:45.996 09:26:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:45.996 09:26:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:45.996 09:26:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:45.996 09:26:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:45.996 09:26:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:46.258 09:26:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:46.258 09:26:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:46.258 09:26:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:46.258 00:05:46.258 real 0m0.254s 00:05:46.258 user 0m0.213s 00:05:46.258 sys 0m0.029s 00:05:46.258 09:26:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:05:46.258 09:26:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.258 ************************************ 00:05:46.258 END TEST rpc_trace_cmd_test 00:05:46.258 ************************************ 00:05:46.258 09:26:45 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:46.258 09:26:45 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:46.258 09:26:45 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:46.258 09:26:45 rpc -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:05:46.258 09:26:45 rpc -- common/autotest_common.sh@1110 -- # xtrace_disable 00:05:46.258 09:26:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.258 ************************************ 00:05:46.258 START TEST rpc_daemon_integrity 00:05:46.258 ************************************ 00:05:46.258 09:26:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # rpc_integrity 00:05:46.258 09:26:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.258 09:26:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:46.258 09:26:45 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.258 09:26:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:46.258 09:26:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:46.258 09:26:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:46.258 09:26:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:46.258 09:26:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.258 09:26:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:46.258 09:26:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.258 09:26:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:46.258 09:26:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:46.258 09:26:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.258 09:26:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:46.258 09:26:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.258 09:26:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:46.258 09:26:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.258 { 00:05:46.258 "name": "Malloc2", 00:05:46.258 "aliases": [ 00:05:46.258 "33229cad-7c30-4384-a19b-9768faf74143" 00:05:46.258 ], 00:05:46.258 "product_name": "Malloc disk", 00:05:46.258 "block_size": 512, 00:05:46.258 "num_blocks": 16384, 00:05:46.258 "uuid": "33229cad-7c30-4384-a19b-9768faf74143", 00:05:46.258 "assigned_rate_limits": { 00:05:46.258 "rw_ios_per_sec": 0, 00:05:46.258 "rw_mbytes_per_sec": 0, 00:05:46.258 "r_mbytes_per_sec": 0, 00:05:46.258 "w_mbytes_per_sec": 0 00:05:46.258 }, 00:05:46.258 "claimed": false, 00:05:46.258 "zoned": false, 00:05:46.258 "supported_io_types": { 00:05:46.258 "read": true, 00:05:46.258 "write": true, 00:05:46.258 "unmap": true, 00:05:46.258 "flush": true, 00:05:46.258 "reset": true, 00:05:46.258 "nvme_admin": false, 00:05:46.258 "nvme_io": false, 00:05:46.258 "nvme_io_md": false, 00:05:46.258 "write_zeroes": true, 00:05:46.258 "zcopy": true, 00:05:46.258 "get_zone_info": false, 00:05:46.258 "zone_management": false, 00:05:46.258 "zone_append": false, 00:05:46.258 "compare": false, 00:05:46.258 "compare_and_write": false, 00:05:46.258 "abort": true, 00:05:46.258 "seek_hole": false, 00:05:46.258 "seek_data": false, 00:05:46.258 "copy": true, 00:05:46.258 "nvme_iov_md": false 00:05:46.258 }, 00:05:46.258 "memory_domains": [ 00:05:46.258 { 00:05:46.258 "dma_device_id": "system", 00:05:46.258 "dma_device_type": 1 00:05:46.258 }, 00:05:46.258 { 00:05:46.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.258 "dma_device_type": 2 00:05:46.258 } 00:05:46.258 ], 00:05:46.258 "driver_specific": {} 00:05:46.258 } 00:05:46.258 ]' 00:05:46.258 09:26:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:46.520 09:26:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.520 09:26:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:46.520 09:26:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:46.520 09:26:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.520 [2024-10-07 09:26:45.943909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:46.520 
[2024-10-07 09:26:45.943954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.520 [2024-10-07 09:26:45.943968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a6a080 00:05:46.520 [2024-10-07 09:26:45.943978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.520 [2024-10-07 09:26:45.945446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.520 [2024-10-07 09:26:45.945481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.520 Passthru0 00:05:46.520 09:26:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:46.520 09:26:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.520 09:26:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:46.520 09:26:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.520 09:26:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:46.520 09:26:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.520 { 00:05:46.520 "name": "Malloc2", 00:05:46.520 "aliases": [ 00:05:46.520 "33229cad-7c30-4384-a19b-9768faf74143" 00:05:46.520 ], 00:05:46.520 "product_name": "Malloc disk", 00:05:46.520 "block_size": 512, 00:05:46.520 "num_blocks": 16384, 00:05:46.520 "uuid": "33229cad-7c30-4384-a19b-9768faf74143", 00:05:46.520 "assigned_rate_limits": { 00:05:46.520 "rw_ios_per_sec": 0, 00:05:46.520 "rw_mbytes_per_sec": 0, 00:05:46.520 "r_mbytes_per_sec": 0, 00:05:46.520 "w_mbytes_per_sec": 0 00:05:46.520 }, 00:05:46.520 "claimed": true, 00:05:46.520 "claim_type": "exclusive_write", 00:05:46.520 "zoned": false, 00:05:46.520 "supported_io_types": { 00:05:46.520 "read": true, 00:05:46.520 "write": true, 00:05:46.520 "unmap": true, 00:05:46.520 "flush": true, 00:05:46.520 "reset": true, 00:05:46.520 "nvme_admin": false, 00:05:46.520 "nvme_io": false, 00:05:46.520 "nvme_io_md": false, 00:05:46.520 "write_zeroes": true, 00:05:46.520 "zcopy": true, 00:05:46.520 "get_zone_info": false, 00:05:46.520 "zone_management": false, 00:05:46.520 "zone_append": false, 00:05:46.520 "compare": false, 00:05:46.520 "compare_and_write": false, 00:05:46.520 "abort": true, 00:05:46.520 "seek_hole": false, 00:05:46.520 "seek_data": false, 00:05:46.520 "copy": true, 00:05:46.520 "nvme_iov_md": false 00:05:46.520 }, 00:05:46.520 "memory_domains": [ 00:05:46.520 { 00:05:46.520 "dma_device_id": "system", 00:05:46.520 "dma_device_type": 1 00:05:46.520 }, 00:05:46.520 { 00:05:46.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.520 "dma_device_type": 2 00:05:46.520 } 00:05:46.520 ], 00:05:46.520 "driver_specific": {} 00:05:46.520 }, 00:05:46.520 { 00:05:46.520 "name": "Passthru0", 00:05:46.520 "aliases": [ 00:05:46.520 "f7941a46-1734-548f-acf7-06a234db8897" 00:05:46.520 ], 00:05:46.520 "product_name": "passthru", 00:05:46.520 "block_size": 512, 00:05:46.520 "num_blocks": 16384, 00:05:46.520 "uuid": "f7941a46-1734-548f-acf7-06a234db8897", 00:05:46.520 "assigned_rate_limits": { 00:05:46.520 "rw_ios_per_sec": 0, 00:05:46.520 "rw_mbytes_per_sec": 0, 00:05:46.520 "r_mbytes_per_sec": 0, 00:05:46.520 "w_mbytes_per_sec": 0 00:05:46.520 }, 00:05:46.520 "claimed": false, 00:05:46.520 "zoned": false, 00:05:46.520 "supported_io_types": { 00:05:46.520 "read": true, 00:05:46.520 "write": true, 00:05:46.520 "unmap": true, 00:05:46.520 "flush": true, 00:05:46.520 "reset": true, 
00:05:46.520 "nvme_admin": false, 00:05:46.520 "nvme_io": false, 00:05:46.520 "nvme_io_md": false, 00:05:46.520 "write_zeroes": true, 00:05:46.520 "zcopy": true, 00:05:46.520 "get_zone_info": false, 00:05:46.520 "zone_management": false, 00:05:46.520 "zone_append": false, 00:05:46.520 "compare": false, 00:05:46.520 "compare_and_write": false, 00:05:46.520 "abort": true, 00:05:46.520 "seek_hole": false, 00:05:46.520 "seek_data": false, 00:05:46.520 "copy": true, 00:05:46.520 "nvme_iov_md": false 00:05:46.520 }, 00:05:46.520 "memory_domains": [ 00:05:46.520 { 00:05:46.520 "dma_device_id": "system", 00:05:46.520 "dma_device_type": 1 00:05:46.520 }, 00:05:46.520 { 00:05:46.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.520 "dma_device_type": 2 00:05:46.520 } 00:05:46.520 ], 00:05:46.520 "driver_specific": { 00:05:46.520 "passthru": { 00:05:46.520 "name": "Passthru0", 00:05:46.520 "base_bdev_name": "Malloc2" 00:05:46.520 } 00:05:46.520 } 00:05:46.520 } 00:05:46.520 ]' 00:05:46.521 09:26:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:46.521 09:26:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.521 09:26:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.521 09:26:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:46.521 09:26:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.521 09:26:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:46.521 09:26:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:46.521 09:26:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:46.521 09:26:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.521 09:26:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:46.521 09:26:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.521 09:26:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:46.521 09:26:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.521 09:26:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:46.521 09:26:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.521 09:26:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:46.521 09:26:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.521 00:05:46.521 real 0m0.294s 00:05:46.521 user 0m0.177s 00:05:46.521 sys 0m0.047s 00:05:46.521 09:26:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # xtrace_disable 00:05:46.521 09:26:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.521 ************************************ 00:05:46.521 END TEST rpc_daemon_integrity 00:05:46.521 ************************************ 00:05:46.521 09:26:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:46.521 09:26:46 rpc -- rpc/rpc.sh@84 -- # killprocess 3118229 00:05:46.521 09:26:46 rpc -- common/autotest_common.sh@953 -- # '[' -z 3118229 ']' 00:05:46.521 09:26:46 rpc -- common/autotest_common.sh@957 -- # kill -0 3118229 00:05:46.521 09:26:46 rpc -- common/autotest_common.sh@958 -- # uname 00:05:46.521 09:26:46 rpc -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:05:46.521 09:26:46 rpc -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3118229 
00:05:46.782 09:26:46 rpc -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:05:46.782 09:26:46 rpc -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:05:46.782 09:26:46 rpc -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3118229' 00:05:46.782 killing process with pid 3118229 00:05:46.782 09:26:46 rpc -- common/autotest_common.sh@972 -- # kill 3118229 00:05:46.782 09:26:46 rpc -- common/autotest_common.sh@977 -- # wait 3118229 00:05:47.043 00:05:47.043 real 0m2.771s 00:05:47.043 user 0m3.496s 00:05:47.043 sys 0m0.865s 00:05:47.043 09:26:46 rpc -- common/autotest_common.sh@1129 -- # xtrace_disable 00:05:47.043 09:26:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.043 ************************************ 00:05:47.043 END TEST rpc 00:05:47.043 ************************************ 00:05:47.043 09:26:46 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:47.043 09:26:46 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:05:47.043 09:26:46 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:05:47.043 09:26:46 -- common/autotest_common.sh@10 -- # set +x 00:05:47.043 ************************************ 00:05:47.043 START TEST skip_rpc 00:05:47.043 ************************************ 00:05:47.043 09:26:46 skip_rpc -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:47.043 * Looking for test storage... 00:05:47.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:47.043 09:26:46 skip_rpc -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:05:47.043 09:26:46 skip_rpc -- common/autotest_common.sh@1626 -- # lcov --version 00:05:47.043 09:26:46 skip_rpc -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:05:47.305 09:26:46 skip_rpc -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.305 09:26:46 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:47.305 09:26:46 skip_rpc -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.305 09:26:46 skip_rpc -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:05:47.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.305 --rc genhtml_branch_coverage=1 00:05:47.305 --rc genhtml_function_coverage=1 00:05:47.305 --rc genhtml_legend=1 00:05:47.305 --rc geninfo_all_blocks=1 00:05:47.305 --rc geninfo_unexecuted_blocks=1 00:05:47.305 00:05:47.305 ' 00:05:47.305 09:26:46 skip_rpc -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:05:47.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.305 --rc genhtml_branch_coverage=1 00:05:47.305 --rc genhtml_function_coverage=1 00:05:47.305 --rc genhtml_legend=1 00:05:47.305 --rc geninfo_all_blocks=1 00:05:47.305 --rc geninfo_unexecuted_blocks=1 00:05:47.305 00:05:47.305 ' 00:05:47.305 09:26:46 skip_rpc -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:05:47.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.305 --rc genhtml_branch_coverage=1 00:05:47.305 --rc genhtml_function_coverage=1 00:05:47.305 --rc genhtml_legend=1 00:05:47.305 --rc geninfo_all_blocks=1 00:05:47.305 --rc geninfo_unexecuted_blocks=1 00:05:47.305 00:05:47.305 ' 00:05:47.305 09:26:46 skip_rpc -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:05:47.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.305 --rc genhtml_branch_coverage=1 00:05:47.305 --rc genhtml_function_coverage=1 00:05:47.305 --rc genhtml_legend=1 00:05:47.305 --rc geninfo_all_blocks=1 00:05:47.305 --rc geninfo_unexecuted_blocks=1 00:05:47.305 00:05:47.305 ' 00:05:47.305 09:26:46 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:47.305 09:26:46 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:47.305 09:26:46 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:47.305 09:26:46 skip_rpc -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:05:47.305 09:26:46 skip_rpc -- common/autotest_common.sh@1110 -- # xtrace_disable 00:05:47.305 09:26:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.305 ************************************ 00:05:47.305 START TEST skip_rpc 00:05:47.305 ************************************ 00:05:47.305 09:26:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # test_skip_rpc 00:05:47.305 
09:26:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3119090 00:05:47.305 09:26:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.305 09:26:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:47.305 09:26:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:47.305 [2024-10-07 09:26:46.875731] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:05:47.305 [2024-10-07 09:26:46.875793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3119090 ] 00:05:47.305 [2024-10-07 09:26:46.958415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.566 [2024-10-07 09:26:47.052895] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # local es=0 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@656 -- # rpc_cmd spdk_get_version 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@656 -- # es=1 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3119090 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' -z 3119090 ']' 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # kill -0 3119090 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # uname 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3119090 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3119090' 00:05:52.855 killing process with pid 3119090 00:05:52.855 09:26:51 
skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # kill 3119090 00:05:52.855 09:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@977 -- # wait 3119090 00:05:52.855 00:05:52.855 real 0m5.279s 00:05:52.855 user 0m5.031s 00:05:52.855 sys 0m0.299s 00:05:52.855 09:26:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # xtrace_disable 00:05:52.855 09:26:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.855 ************************************ 00:05:52.855 END TEST skip_rpc 00:05:52.855 ************************************ 00:05:52.855 09:26:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:52.855 09:26:52 skip_rpc -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:05:52.855 09:26:52 skip_rpc -- common/autotest_common.sh@1110 -- # xtrace_disable 00:05:52.855 09:26:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.855 ************************************ 00:05:52.855 START TEST skip_rpc_with_json 00:05:52.855 ************************************ 00:05:52.855 09:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # test_skip_rpc_with_json 00:05:52.855 09:26:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:52.855 09:26:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3120126 00:05:52.855 09:26:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.855 09:26:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3120126 00:05:52.855 09:26:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.855 09:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # '[' -z 3120126 ']' 00:05:52.855 09:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.855 09:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local max_retries=100 00:05:52.855 09:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.856 09:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@843 -- # xtrace_disable 00:05:52.856 09:26:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.856 [2024-10-07 09:26:52.229543] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:05:52.856 [2024-10-07 09:26:52.229591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3120126 ] 00:05:52.856 [2024-10-07 09:26:52.305834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.856 [2024-10-07 09:26:52.360480] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.428 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:05:53.428 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@867 -- # return 0 00:05:53.428 09:26:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:53.428 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:53.428 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.428 [2024-10-07 09:26:53.007347] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:53.428 request: 00:05:53.428 { 00:05:53.428 "trtype": "tcp", 00:05:53.428 "method": "nvmf_get_transports", 00:05:53.428 "req_id": 1 00:05:53.428 } 00:05:53.428 Got JSON-RPC error response 00:05:53.428 response: 00:05:53.428 { 00:05:53.428 "code": -19, 00:05:53.428 "message": "No such device" 00:05:53.428 } 00:05:53.428 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:05:53.428 09:26:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:53.428 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:53.428 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.428 [2024-10-07 09:26:53.019445] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.428 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:53.428 09:26:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:53.428 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@564 -- # xtrace_disable 00:05:53.428 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.689 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:05:53.689 09:26:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:53.689 { 00:05:53.689 "subsystems": [ 00:05:53.689 { 00:05:53.689 "subsystem": "fsdev", 00:05:53.689 "config": [ 00:05:53.689 { 00:05:53.689 "method": "fsdev_set_opts", 00:05:53.689 "params": { 00:05:53.689 "fsdev_io_pool_size": 65535, 00:05:53.689 "fsdev_io_cache_size": 256 00:05:53.689 } 00:05:53.689 } 00:05:53.689 ] 00:05:53.689 }, 00:05:53.689 { 00:05:53.689 "subsystem": "vfio_user_target", 00:05:53.689 "config": null 00:05:53.689 }, 00:05:53.689 { 00:05:53.689 "subsystem": "keyring", 00:05:53.689 "config": [] 00:05:53.689 }, 00:05:53.689 { 00:05:53.689 "subsystem": "iobuf", 00:05:53.689 "config": [ 00:05:53.689 { 00:05:53.689 "method": "iobuf_set_options", 00:05:53.689 "params": { 00:05:53.689 "small_pool_count": 8192, 00:05:53.689 "large_pool_count": 1024, 00:05:53.689 "small_bufsize": 8192, 00:05:53.689 "large_bufsize": 135168 00:05:53.689 } 00:05:53.689 } 00:05:53.689 ] 00:05:53.689 }, 00:05:53.689 { 
00:05:53.689 "subsystem": "sock", 00:05:53.689 "config": [ 00:05:53.689 { 00:05:53.689 "method": "sock_set_default_impl", 00:05:53.689 "params": { 00:05:53.689 "impl_name": "posix" 00:05:53.689 } 00:05:53.689 }, 00:05:53.689 { 00:05:53.689 "method": "sock_impl_set_options", 00:05:53.689 "params": { 00:05:53.689 "impl_name": "ssl", 00:05:53.689 "recv_buf_size": 4096, 00:05:53.689 "send_buf_size": 4096, 00:05:53.689 "enable_recv_pipe": true, 00:05:53.689 "enable_quickack": false, 00:05:53.689 "enable_placement_id": 0, 00:05:53.689 "enable_zerocopy_send_server": true, 00:05:53.689 "enable_zerocopy_send_client": false, 00:05:53.689 "zerocopy_threshold": 0, 00:05:53.690 "tls_version": 0, 00:05:53.690 "enable_ktls": false 00:05:53.690 } 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "method": "sock_impl_set_options", 00:05:53.690 "params": { 00:05:53.690 "impl_name": "posix", 00:05:53.690 "recv_buf_size": 2097152, 00:05:53.690 "send_buf_size": 2097152, 00:05:53.690 "enable_recv_pipe": true, 00:05:53.690 "enable_quickack": false, 00:05:53.690 "enable_placement_id": 0, 00:05:53.690 "enable_zerocopy_send_server": true, 00:05:53.690 "enable_zerocopy_send_client": false, 00:05:53.690 "zerocopy_threshold": 0, 00:05:53.690 "tls_version": 0, 00:05:53.690 "enable_ktls": false 00:05:53.690 } 00:05:53.690 } 00:05:53.690 ] 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "subsystem": "vmd", 00:05:53.690 "config": [] 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "subsystem": "accel", 00:05:53.690 "config": [ 00:05:53.690 { 00:05:53.690 "method": "accel_set_options", 00:05:53.690 "params": { 00:05:53.690 "small_cache_size": 128, 00:05:53.690 "large_cache_size": 16, 00:05:53.690 "task_count": 2048, 00:05:53.690 "sequence_count": 2048, 00:05:53.690 "buf_count": 2048 00:05:53.690 } 00:05:53.690 } 00:05:53.690 ] 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "subsystem": "bdev", 00:05:53.690 "config": [ 00:05:53.690 { 00:05:53.690 "method": "bdev_set_options", 00:05:53.690 "params": { 00:05:53.690 "bdev_io_pool_size": 65535, 00:05:53.690 "bdev_io_cache_size": 256, 00:05:53.690 "bdev_auto_examine": true, 00:05:53.690 "iobuf_small_cache_size": 128, 00:05:53.690 "iobuf_large_cache_size": 16 00:05:53.690 } 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "method": "bdev_raid_set_options", 00:05:53.690 "params": { 00:05:53.690 "process_window_size_kb": 1024, 00:05:53.690 "process_max_bandwidth_mb_sec": 0 00:05:53.690 } 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "method": "bdev_iscsi_set_options", 00:05:53.690 "params": { 00:05:53.690 "timeout_sec": 30 00:05:53.690 } 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "method": "bdev_nvme_set_options", 00:05:53.690 "params": { 00:05:53.690 "action_on_timeout": "none", 00:05:53.690 "timeout_us": 0, 00:05:53.690 "timeout_admin_us": 0, 00:05:53.690 "keep_alive_timeout_ms": 10000, 00:05:53.690 "arbitration_burst": 0, 00:05:53.690 "low_priority_weight": 0, 00:05:53.690 "medium_priority_weight": 0, 00:05:53.690 "high_priority_weight": 0, 00:05:53.690 "nvme_adminq_poll_period_us": 10000, 00:05:53.690 "nvme_ioq_poll_period_us": 0, 00:05:53.690 "io_queue_requests": 0, 00:05:53.690 "delay_cmd_submit": true, 00:05:53.690 "transport_retry_count": 4, 00:05:53.690 "bdev_retry_count": 3, 00:05:53.690 "transport_ack_timeout": 0, 00:05:53.690 "ctrlr_loss_timeout_sec": 0, 00:05:53.690 "reconnect_delay_sec": 0, 00:05:53.690 "fast_io_fail_timeout_sec": 0, 00:05:53.690 "disable_auto_failback": false, 00:05:53.690 "generate_uuids": false, 00:05:53.690 "transport_tos": 0, 00:05:53.690 "nvme_error_stat": false, 
00:05:53.690 "rdma_srq_size": 0, 00:05:53.690 "io_path_stat": false, 00:05:53.690 "allow_accel_sequence": false, 00:05:53.690 "rdma_max_cq_size": 0, 00:05:53.690 "rdma_cm_event_timeout_ms": 0, 00:05:53.690 "dhchap_digests": [ 00:05:53.690 "sha256", 00:05:53.690 "sha384", 00:05:53.690 "sha512" 00:05:53.690 ], 00:05:53.690 "dhchap_dhgroups": [ 00:05:53.690 "null", 00:05:53.690 "ffdhe2048", 00:05:53.690 "ffdhe3072", 00:05:53.690 "ffdhe4096", 00:05:53.690 "ffdhe6144", 00:05:53.690 "ffdhe8192" 00:05:53.690 ] 00:05:53.690 } 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "method": "bdev_nvme_set_hotplug", 00:05:53.690 "params": { 00:05:53.690 "period_us": 100000, 00:05:53.690 "enable": false 00:05:53.690 } 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "method": "bdev_wait_for_examine" 00:05:53.690 } 00:05:53.690 ] 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "subsystem": "scsi", 00:05:53.690 "config": null 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "subsystem": "scheduler", 00:05:53.690 "config": [ 00:05:53.690 { 00:05:53.690 "method": "framework_set_scheduler", 00:05:53.690 "params": { 00:05:53.690 "name": "static" 00:05:53.690 } 00:05:53.690 } 00:05:53.690 ] 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "subsystem": "vhost_scsi", 00:05:53.690 "config": [] 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "subsystem": "vhost_blk", 00:05:53.690 "config": [] 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "subsystem": "ublk", 00:05:53.690 "config": [] 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "subsystem": "nbd", 00:05:53.690 "config": [] 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "subsystem": "nvmf", 00:05:53.690 "config": [ 00:05:53.690 { 00:05:53.690 "method": "nvmf_set_config", 00:05:53.690 "params": { 00:05:53.690 "discovery_filter": "match_any", 00:05:53.690 "admin_cmd_passthru": { 00:05:53.690 "identify_ctrlr": false 00:05:53.690 }, 00:05:53.690 "dhchap_digests": [ 00:05:53.690 "sha256", 00:05:53.690 "sha384", 00:05:53.690 "sha512" 00:05:53.690 ], 00:05:53.690 "dhchap_dhgroups": [ 00:05:53.690 "null", 00:05:53.690 "ffdhe2048", 00:05:53.690 "ffdhe3072", 00:05:53.690 "ffdhe4096", 00:05:53.690 "ffdhe6144", 00:05:53.690 "ffdhe8192" 00:05:53.690 ] 00:05:53.690 } 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "method": "nvmf_set_max_subsystems", 00:05:53.690 "params": { 00:05:53.690 "max_subsystems": 1024 00:05:53.690 } 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "method": "nvmf_set_crdt", 00:05:53.690 "params": { 00:05:53.690 "crdt1": 0, 00:05:53.690 "crdt2": 0, 00:05:53.690 "crdt3": 0 00:05:53.690 } 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "method": "nvmf_create_transport", 00:05:53.690 "params": { 00:05:53.690 "trtype": "TCP", 00:05:53.690 "max_queue_depth": 128, 00:05:53.690 "max_io_qpairs_per_ctrlr": 127, 00:05:53.690 "in_capsule_data_size": 4096, 00:05:53.690 "max_io_size": 131072, 00:05:53.690 "io_unit_size": 131072, 00:05:53.690 "max_aq_depth": 128, 00:05:53.690 "num_shared_buffers": 511, 00:05:53.690 "buf_cache_size": 4294967295, 00:05:53.690 "dif_insert_or_strip": false, 00:05:53.690 "zcopy": false, 00:05:53.690 "c2h_success": true, 00:05:53.690 "sock_priority": 0, 00:05:53.690 "abort_timeout_sec": 1, 00:05:53.690 "ack_timeout": 0, 00:05:53.690 "data_wr_pool_size": 0 00:05:53.690 } 00:05:53.690 } 00:05:53.690 ] 00:05:53.690 }, 00:05:53.690 { 00:05:53.690 "subsystem": "iscsi", 00:05:53.690 "config": [ 00:05:53.690 { 00:05:53.690 "method": "iscsi_set_options", 00:05:53.690 "params": { 00:05:53.690 "node_base": "iqn.2016-06.io.spdk", 00:05:53.690 "max_sessions": 128, 00:05:53.690 
"max_connections_per_session": 2, 00:05:53.690 "max_queue_depth": 64, 00:05:53.690 "default_time2wait": 2, 00:05:53.690 "default_time2retain": 20, 00:05:53.690 "first_burst_length": 8192, 00:05:53.690 "immediate_data": true, 00:05:53.690 "allow_duplicated_isid": false, 00:05:53.690 "error_recovery_level": 0, 00:05:53.690 "nop_timeout": 60, 00:05:53.690 "nop_in_interval": 30, 00:05:53.690 "disable_chap": false, 00:05:53.690 "require_chap": false, 00:05:53.690 "mutual_chap": false, 00:05:53.690 "chap_group": 0, 00:05:53.690 "max_large_datain_per_connection": 64, 00:05:53.690 "max_r2t_per_connection": 4, 00:05:53.690 "pdu_pool_size": 36864, 00:05:53.690 "immediate_data_pool_size": 16384, 00:05:53.690 "data_out_pool_size": 2048 00:05:53.690 } 00:05:53.690 } 00:05:53.690 ] 00:05:53.690 } 00:05:53.690 ] 00:05:53.690 } 00:05:53.690 09:26:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:53.690 09:26:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3120126 00:05:53.690 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' -z 3120126 ']' 00:05:53.690 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # kill -0 3120126 00:05:53.690 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # uname 00:05:53.690 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:05:53.690 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3120126 00:05:53.690 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:05:53.690 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:05:53.690 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3120126' 00:05:53.690 killing process with pid 3120126 00:05:53.690 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # kill 3120126 00:05:53.690 09:26:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@977 -- # wait 3120126 00:05:53.951 09:26:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3120464 00:05:53.951 09:26:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:53.952 09:26:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3120464 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' -z 3120464 ']' 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # kill -0 3120464 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # uname 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3120464 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # echo 'killing 
process with pid 3120464' 00:05:59.253 killing process with pid 3120464 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # kill 3120464 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@977 -- # wait 3120464 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:59.253 00:05:59.253 real 0m6.574s 00:05:59.253 user 0m6.492s 00:05:59.253 sys 0m0.548s 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # xtrace_disable 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.253 ************************************ 00:05:59.253 END TEST skip_rpc_with_json 00:05:59.253 ************************************ 00:05:59.253 09:26:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:59.253 09:26:58 skip_rpc -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:05:59.253 09:26:58 skip_rpc -- common/autotest_common.sh@1110 -- # xtrace_disable 00:05:59.253 09:26:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.253 ************************************ 00:05:59.253 START TEST skip_rpc_with_delay 00:05:59.253 ************************************ 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # test_skip_rpc_with_delay 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # local es=0 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@647 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@647 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@647 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@656 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.253 [2024-10-07 
09:26:58.891309] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:59.253 [2024-10-07 09:26:58.891394] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@656 -- # es=1 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:05:59.253 00:05:59.253 real 0m0.086s 00:05:59.253 user 0m0.059s 00:05:59.253 sys 0m0.027s 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # xtrace_disable 00:05:59.253 09:26:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:59.253 ************************************ 00:05:59.253 END TEST skip_rpc_with_delay 00:05:59.253 ************************************ 00:05:59.515 09:26:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:59.515 09:26:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:59.515 09:26:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:59.515 09:26:58 skip_rpc -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:05:59.515 09:26:58 skip_rpc -- common/autotest_common.sh@1110 -- # xtrace_disable 00:05:59.515 09:26:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.515 ************************************ 00:05:59.515 START TEST exit_on_failed_rpc_init 00:05:59.515 ************************************ 00:05:59.515 09:26:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # test_exit_on_failed_rpc_init 00:05:59.515 09:26:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3121536 00:05:59.515 09:26:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3121536 00:05:59.515 09:26:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.515 09:26:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # '[' -z 3121536 ']' 00:05:59.515 09:26:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.515 09:26:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local max_retries=100 00:05:59.515 09:26:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.515 09:26:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@843 -- # xtrace_disable 00:05:59.515 09:26:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:59.515 [2024-10-07 09:26:59.047074] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
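The skip_rpc_with_delay failure above is the intended result: --wait-for-rpc is meaningless once --no-rpc-server has disabled the RPC listener, so app.c rejects the combination. A minimal reproduction, assuming only the flags and binary path already visible in this trace (run from the spdk checkout this job built):

    # expected to fail fast with the app.c error logged above;
    # the test's NOT wrapper turns the non-zero exit into a pass
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    echo $?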
00:05:59.515 [2024-10-07 09:26:59.047133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3121536 ] 00:05:59.515 [2024-10-07 09:26:59.126367] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.776 [2024-10-07 09:26:59.187156] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.347 09:26:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:00.347 09:26:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@867 -- # return 0 00:06:00.347 09:26:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.347 09:26:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.347 09:26:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # local es=0 00:06:00.347 09:26:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.347 09:26:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.347 09:26:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:06:00.347 09:26:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.347 09:26:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:06:00.347 09:26:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@647 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.347 09:26:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:06:00.347 09:26:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@647 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.347 09:26:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@647 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:00.347 09:26:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@656 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.347 [2024-10-07 09:26:59.896945] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:06:00.347 [2024-10-07 09:26:59.897004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3121736 ] 00:06:00.347 [2024-10-07 09:26:59.972603] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.608 [2024-10-07 09:27:00.039260] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.608 [2024-10-07 09:27:00.039316] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
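Both spdk_tgt instances above default to the same RPC socket, which is exactly the condition exit_on_failed_rpc_init provokes: the second reactor starts on core 1 and then fails to claim the listener. A sketch of the collision; the alternate socket path in the last line is illustrative, not taken from this run:

    ./build/bin/spdk_tgt -m 0x1 &                       # first instance claims /var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x2                         # fails: socket path in use, as logged here
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock  # -r selects a private socket, as later tests do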
00:06:00.608 [2024-10-07 09:27:00.039325] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:00.608 [2024-10-07 09:27:00.039332] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:00.608 09:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@656 -- # es=234 00:06:00.608 09:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:06:00.608 09:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # es=106 00:06:00.608 09:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@666 -- # case "$es" in 00:06:00.608 09:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@673 -- # es=1 00:06:00.608 09:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:06:00.608 09:27:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:00.608 09:27:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3121536 00:06:00.608 09:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' -z 3121536 ']' 00:06:00.608 09:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # kill -0 3121536 00:06:00.608 09:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # uname 00:06:00.608 09:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:06:00.608 09:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3121536 00:06:00.608 09:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:06:00.608 09:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:06:00.608 09:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3121536' 00:06:00.608 killing process with pid 3121536 00:06:00.608 09:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # kill 3121536 00:06:00.608 09:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@977 -- # wait 3121536 00:06:00.869 00:06:00.869 real 0m1.379s 00:06:00.869 user 0m1.618s 00:06:00.869 sys 0m0.410s 00:06:00.869 09:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:00.869 09:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:00.869 ************************************ 00:06:00.869 END TEST exit_on_failed_rpc_init 00:06:00.869 ************************************ 00:06:00.869 09:27:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:00.869 00:06:00.869 real 0m13.855s 00:06:00.869 user 0m13.417s 00:06:00.869 sys 0m1.632s 00:06:00.869 09:27:00 skip_rpc -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:00.869 09:27:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.869 ************************************ 00:06:00.869 END TEST skip_rpc 00:06:00.869 ************************************ 00:06:00.869 09:27:00 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:00.869 09:27:00 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:00.869 09:27:00 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:00.869 09:27:00 -- 
common/autotest_common.sh@10 -- # set +x 00:06:00.869 ************************************ 00:06:00.869 START TEST rpc_client 00:06:00.869 ************************************ 00:06:00.869 09:27:00 rpc_client -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:01.130 * Looking for test storage... 00:06:01.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:01.130 09:27:00 rpc_client -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:06:01.130 09:27:00 rpc_client -- common/autotest_common.sh@1626 -- # lcov --version 00:06:01.130 09:27:00 rpc_client -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:06:01.130 09:27:00 rpc_client -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.130 09:27:00 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:01.130 09:27:00 rpc_client -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.130 09:27:00 rpc_client -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:06:01.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.130 --rc genhtml_branch_coverage=1 00:06:01.130 --rc genhtml_function_coverage=1 00:06:01.130 --rc genhtml_legend=1 00:06:01.130 --rc geninfo_all_blocks=1 00:06:01.130 --rc geninfo_unexecuted_blocks=1 00:06:01.130 00:06:01.130 ' 00:06:01.130 09:27:00 rpc_client -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:06:01.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.130 --rc genhtml_branch_coverage=1 00:06:01.130 --rc genhtml_function_coverage=1 00:06:01.130 --rc genhtml_legend=1 00:06:01.130 --rc geninfo_all_blocks=1 00:06:01.130 --rc geninfo_unexecuted_blocks=1 00:06:01.130 00:06:01.130 ' 00:06:01.130 09:27:00 rpc_client -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:06:01.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.130 --rc genhtml_branch_coverage=1 00:06:01.130 --rc genhtml_function_coverage=1 00:06:01.130 --rc genhtml_legend=1 00:06:01.130 --rc geninfo_all_blocks=1 00:06:01.130 --rc geninfo_unexecuted_blocks=1 00:06:01.130 00:06:01.130 ' 00:06:01.130 09:27:00 rpc_client -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:06:01.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.130 --rc genhtml_branch_coverage=1 00:06:01.130 --rc genhtml_function_coverage=1 00:06:01.130 --rc genhtml_legend=1 00:06:01.130 --rc geninfo_all_blocks=1 00:06:01.130 --rc geninfo_unexecuted_blocks=1 00:06:01.130 00:06:01.130 ' 00:06:01.130 09:27:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:01.130 OK 00:06:01.130 09:27:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:01.130 00:06:01.130 real 0m0.263s 00:06:01.130 user 0m0.140s 00:06:01.130 sys 0m0.136s 00:06:01.130 09:27:00 rpc_client -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:01.130 09:27:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:01.130 ************************************ 00:06:01.130 END TEST rpc_client 00:06:01.130 ************************************ 00:06:01.392 09:27:00 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
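The lt/cmp_versions trace in the rpc_client block above is a field-wise numeric version compare: split both versions on '.' and '-', then walk the fields until one differs. The same logic condensed into one helper; ver_lt is a hypothetical name, not part of scripts/common.sh:

    ver_lt() {   # returns 0 when $1 sorts strictly before $2
        local IFS=.- i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo 'lcov 1.15: use the old option set'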
00:06:01.392 09:27:00 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:01.392 09:27:00 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:01.392 09:27:00 -- common/autotest_common.sh@10 -- # set +x 00:06:01.392 ************************************ 00:06:01.392 START TEST json_config 00:06:01.392 ************************************ 00:06:01.392 09:27:00 json_config -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:01.392 09:27:00 json_config -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:06:01.392 09:27:00 json_config -- common/autotest_common.sh@1626 -- # lcov --version 00:06:01.392 09:27:00 json_config -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:06:01.392 09:27:01 json_config -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:06:01.392 09:27:01 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.392 09:27:01 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.392 09:27:01 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.392 09:27:01 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.392 09:27:01 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.392 09:27:01 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.392 09:27:01 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.392 09:27:01 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.392 09:27:01 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.393 09:27:01 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.393 09:27:01 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.393 09:27:01 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:01.393 09:27:01 json_config -- scripts/common.sh@345 -- # : 1 00:06:01.393 09:27:01 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.393 09:27:01 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.393 09:27:01 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:01.393 09:27:01 json_config -- scripts/common.sh@353 -- # local d=1 00:06:01.393 09:27:01 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.393 09:27:01 json_config -- scripts/common.sh@355 -- # echo 1 00:06:01.393 09:27:01 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.393 09:27:01 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:01.393 09:27:01 json_config -- scripts/common.sh@353 -- # local d=2 00:06:01.393 09:27:01 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.393 09:27:01 json_config -- scripts/common.sh@355 -- # echo 2 00:06:01.393 09:27:01 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.393 09:27:01 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.393 09:27:01 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.393 09:27:01 json_config -- scripts/common.sh@368 -- # return 0 00:06:01.393 09:27:01 json_config -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.393 09:27:01 json_config -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:06:01.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.393 --rc genhtml_branch_coverage=1 00:06:01.393 --rc genhtml_function_coverage=1 00:06:01.393 --rc genhtml_legend=1 00:06:01.393 --rc geninfo_all_blocks=1 00:06:01.393 --rc geninfo_unexecuted_blocks=1 00:06:01.393 00:06:01.393 ' 00:06:01.393 09:27:01 json_config -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:06:01.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.393 --rc genhtml_branch_coverage=1 00:06:01.393 --rc genhtml_function_coverage=1 00:06:01.393 --rc genhtml_legend=1 00:06:01.393 --rc geninfo_all_blocks=1 00:06:01.393 --rc geninfo_unexecuted_blocks=1 00:06:01.393 00:06:01.393 ' 00:06:01.393 09:27:01 json_config -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:06:01.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.393 --rc genhtml_branch_coverage=1 00:06:01.393 --rc genhtml_function_coverage=1 00:06:01.393 --rc genhtml_legend=1 00:06:01.393 --rc geninfo_all_blocks=1 00:06:01.393 --rc geninfo_unexecuted_blocks=1 00:06:01.393 00:06:01.393 ' 00:06:01.393 09:27:01 json_config -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:06:01.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.393 --rc genhtml_branch_coverage=1 00:06:01.393 --rc genhtml_function_coverage=1 00:06:01.393 --rc genhtml_legend=1 00:06:01.393 --rc geninfo_all_blocks=1 00:06:01.393 --rc geninfo_unexecuted_blocks=1 00:06:01.393 00:06:01.393 ' 00:06:01.393 09:27:01 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:01.393 09:27:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:01.393 09:27:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.393 09:27:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.393 09:27:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.393 09:27:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.393 09:27:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.393 09:27:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.393 09:27:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:06:01.393 09:27:01 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.393 09:27:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.393 09:27:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.654 09:27:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:01.654 09:27:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:01.654 09:27:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.654 09:27:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.654 09:27:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:01.654 09:27:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.654 09:27:01 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:01.654 09:27:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:01.654 09:27:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.654 09:27:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.654 09:27:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.654 09:27:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.654 09:27:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.654 09:27:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.655 09:27:01 json_config -- paths/export.sh@5 -- # export PATH 00:06:01.655 09:27:01 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.655 09:27:01 json_config -- nvmf/common.sh@51 -- # : 0 00:06:01.655 09:27:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:01.655 09:27:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:01.655 09:27:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.655 09:27:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.655 09:27:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.655 09:27:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:01.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:01.655 09:27:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:01.655 09:27:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:01.655 09:27:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:01.655 INFO: JSON configuration test init 00:06:01.655 09:27:01 json_config -- 
json_config/json_config.sh@364 -- # json_config_test_init 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:01.655 09:27:01 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:01.655 09:27:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:01.655 09:27:01 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:01.655 09:27:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.655 09:27:01 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:01.655 09:27:01 json_config -- json_config/common.sh@9 -- # local app=target 00:06:01.655 09:27:01 json_config -- json_config/common.sh@10 -- # shift 00:06:01.655 09:27:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:01.655 09:27:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:01.655 09:27:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:01.655 09:27:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.655 09:27:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.655 09:27:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3122091 00:06:01.655 09:27:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:01.655 Waiting for target to run... 00:06:01.655 09:27:01 json_config -- json_config/common.sh@25 -- # waitforlisten 3122091 /var/tmp/spdk_tgt.sock 00:06:01.655 09:27:01 json_config -- common/autotest_common.sh@834 -- # '[' -z 3122091 ']' 00:06:01.655 09:27:01 json_config -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:01.655 09:27:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:01.655 09:27:01 json_config -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:01.655 09:27:01 json_config -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:01.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:01.655 09:27:01 json_config -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:01.655 09:27:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.655 [2024-10-07 09:27:01.149004] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
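Every tgt_rpc call that follows is rpc.py aimed at the private socket this target was started on. The pattern reduced to its moving parts; framework_start_init is the standard counterpart of --wait-for-rpc, though this run releases the wait through load_config instead, and tgt.json is an illustrative file name:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init    # let initialization proceed
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > tgt.json  # snapshot the live config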
00:06:01.655 [2024-10-07 09:27:01.149084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3122091 ] 00:06:01.916 [2024-10-07 09:27:01.471275] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.916 [2024-10-07 09:27:01.524804] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.490 09:27:01 json_config -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:02.490 09:27:01 json_config -- common/autotest_common.sh@867 -- # return 0 00:06:02.490 09:27:01 json_config -- json_config/common.sh@26 -- # echo '' 00:06:02.490 00:06:02.490 09:27:01 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:02.490 09:27:01 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:02.490 09:27:01 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:02.490 09:27:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.490 09:27:01 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:02.490 09:27:01 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:02.490 09:27:01 json_config -- common/autotest_common.sh@733 -- # xtrace_disable 00:06:02.490 09:27:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.490 09:27:01 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:02.490 09:27:01 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:02.490 09:27:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:03.061 09:27:02 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:03.061 09:27:02 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:03.061 09:27:02 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:03.061 09:27:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.061 09:27:02 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:03.061 09:27:02 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:03.061 09:27:02 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:03.061 09:27:02 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:03.061 09:27:02 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:03.061 09:27:02 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:03.061 09:27:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:03.061 09:27:02 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:03.061 09:27:02 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:03.061 09:27:02 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:03.061 09:27:02 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:03.061 09:27:02 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:03.061 09:27:02 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:03.061 09:27:02 json_config -- json_config/json_config.sh@54 -- # sort 00:06:03.061 09:27:02 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:03.061 09:27:02 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:03.061 09:27:02 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:03.061 09:27:02 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:03.061 09:27:02 json_config -- common/autotest_common.sh@733 -- # xtrace_disable 00:06:03.061 09:27:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.322 09:27:02 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:03.322 09:27:02 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:03.322 09:27:02 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:03.322 09:27:02 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:03.322 09:27:02 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:03.322 09:27:02 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:03.322 09:27:02 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:03.322 09:27:02 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:03.322 09:27:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.322 09:27:02 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:03.322 09:27:02 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:03.322 09:27:02 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:03.322 09:27:02 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:03.322 09:27:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:03.322 MallocForNvmf0 00:06:03.322 09:27:02 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:03.322 09:27:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:03.583 MallocForNvmf1 00:06:03.583 09:27:03 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:03.583 09:27:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:03.844 [2024-10-07 09:27:03.290260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.844 09:27:03 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:03.844 09:27:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:04.103 09:27:03 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:04.103 09:27:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:04.103 09:27:03 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:04.103 09:27:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:04.364 09:27:03 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:04.364 09:27:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:04.364 [2024-10-07 09:27:04.004442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:04.624 09:27:04 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:04.624 09:27:04 json_config -- common/autotest_common.sh@733 -- # xtrace_disable 00:06:04.624 09:27:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.624 09:27:04 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:04.624 09:27:04 json_config -- common/autotest_common.sh@733 -- # xtrace_disable 00:06:04.624 09:27:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.624 09:27:04 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:04.624 09:27:04 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:04.624 09:27:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:04.624 MallocBdevForConfigChangeCheck 00:06:04.884 09:27:04 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:04.884 09:27:04 json_config -- common/autotest_common.sh@733 -- # xtrace_disable 00:06:04.884 09:27:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.884 09:27:04 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:04.884 09:27:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.145 09:27:04 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:05.145 INFO: shutting down applications... 
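The target being torn down here was assembled entirely from RPCs visible in the preceding trace; replayed by hand against the same socket they amount to the following (rpc is shorthand for this note only, and the relative paths assume the spdk checkout as working directory):

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
    rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB bdev, 512 B blocks
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    rpc nvmf_create_transport -t tcp -u 8192 -c 0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420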
00:06:05.145 09:27:04 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:05.145 09:27:04 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:05.145 09:27:04 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:05.145 09:27:04 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:05.406 Calling clear_iscsi_subsystem 00:06:05.406 Calling clear_nvmf_subsystem 00:06:05.406 Calling clear_nbd_subsystem 00:06:05.406 Calling clear_ublk_subsystem 00:06:05.406 Calling clear_vhost_blk_subsystem 00:06:05.406 Calling clear_vhost_scsi_subsystem 00:06:05.406 Calling clear_bdev_subsystem 00:06:05.406 09:27:05 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:05.406 09:27:05 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:05.406 09:27:05 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:05.406 09:27:05 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.406 09:27:05 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:05.406 09:27:05 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:05.978 09:27:05 json_config -- json_config/json_config.sh@352 -- # break 00:06:05.978 09:27:05 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:05.978 09:27:05 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:05.978 09:27:05 json_config -- json_config/common.sh@31 -- # local app=target 00:06:05.978 09:27:05 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:05.978 09:27:05 json_config -- json_config/common.sh@35 -- # [[ -n 3122091 ]] 00:06:05.978 09:27:05 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3122091 00:06:05.978 09:27:05 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:05.978 09:27:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:05.978 09:27:05 json_config -- json_config/common.sh@41 -- # kill -0 3122091 00:06:05.978 09:27:05 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:06.550 09:27:05 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:06.550 09:27:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.550 09:27:05 json_config -- json_config/common.sh@41 -- # kill -0 3122091 00:06:06.550 09:27:05 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:06.550 09:27:05 json_config -- json_config/common.sh@43 -- # break 00:06:06.550 09:27:05 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:06.550 09:27:05 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:06.550 SPDK target shutdown done 00:06:06.550 09:27:05 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:06.550 INFO: relaunching applications... 
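The shutdown just logged is a SIGINT followed by a bounded liveness poll, matching the i<30 / sleep 0.5 loop traced from json_config/common.sh. The idiom in isolation, with $pid standing in for app_pid[target]:

    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 probes liveness without sending a signal
        sleep 0.5
    done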
00:06:06.550 09:27:05 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.550 09:27:05 json_config -- json_config/common.sh@9 -- # local app=target 00:06:06.550 09:27:05 json_config -- json_config/common.sh@10 -- # shift 00:06:06.550 09:27:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:06.550 09:27:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:06.550 09:27:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:06.550 09:27:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.550 09:27:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.550 09:27:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3123264 00:06:06.550 09:27:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:06.550 Waiting for target to run... 00:06:06.550 09:27:05 json_config -- json_config/common.sh@25 -- # waitforlisten 3123264 /var/tmp/spdk_tgt.sock 00:06:06.550 09:27:05 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.550 09:27:05 json_config -- common/autotest_common.sh@834 -- # '[' -z 3123264 ']' 00:06:06.550 09:27:05 json_config -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:06.550 09:27:05 json_config -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:06.550 09:27:05 json_config -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:06.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:06.550 09:27:05 json_config -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:06.550 09:27:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.550 [2024-10-07 09:27:05.974333] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:06:06.550 [2024-10-07 09:27:05.974404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3123264 ] 00:06:06.818 [2024-10-07 09:27:06.395314] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.818 [2024-10-07 09:27:06.453531] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.391 [2024-10-07 09:27:06.952571] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:07.391 [2024-10-07 09:27:06.984996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:07.391 09:27:07 json_config -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:07.391 09:27:07 json_config -- common/autotest_common.sh@867 -- # return 0 00:06:07.391 09:27:07 json_config -- json_config/common.sh@26 -- # echo '' 00:06:07.391 00:06:07.391 09:27:07 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:07.391 09:27:07 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:07.391 INFO: Checking if target configuration is the same... 
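The "same configuration" check that follows is normalize-then-diff: the relaunched target's save_config output and the JSON it booted from are both passed through config_filter.py -method sort before diff -u, so key ordering cannot cause a false mismatch. The core of json_diff.sh, assuming config_filter.py filters stdin to stdout as its trace suggests, with the mktemp names simplified:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live.json
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/disk.json
    diff -u /tmp/live.json /tmp/disk.json && echo 'INFO: JSON config files are the same'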
00:06:07.391 09:27:07 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:07.391 09:27:07 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.391 09:27:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.391 + '[' 2 -ne 2 ']' 00:06:07.391 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:07.391 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:07.391 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:07.391 +++ basename /dev/fd/62 00:06:07.391 ++ mktemp /tmp/62.XXX 00:06:07.391 + tmp_file_1=/tmp/62.m0t 00:06:07.391 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.391 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:07.652 + tmp_file_2=/tmp/spdk_tgt_config.json.vFB 00:06:07.652 + ret=0 00:06:07.652 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:07.913 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:07.913 + diff -u /tmp/62.m0t /tmp/spdk_tgt_config.json.vFB 00:06:07.913 + echo 'INFO: JSON config files are the same' 00:06:07.913 INFO: JSON config files are the same 00:06:07.913 + rm /tmp/62.m0t /tmp/spdk_tgt_config.json.vFB 00:06:07.913 + exit 0 00:06:07.913 09:27:07 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:07.913 09:27:07 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:07.913 INFO: changing configuration and checking if this can be detected... 00:06:07.913 09:27:07 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:07.913 09:27:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:08.174 09:27:07 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.174 09:27:07 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:08.174 09:27:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:08.174 + '[' 2 -ne 2 ']' 00:06:08.174 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:08.174 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:08.174 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:08.174 +++ basename /dev/fd/62 00:06:08.174 ++ mktemp /tmp/62.XXX 00:06:08.174 + tmp_file_1=/tmp/62.8TY 00:06:08.174 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.174 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:08.174 + tmp_file_2=/tmp/spdk_tgt_config.json.xZ6 00:06:08.174 + ret=0 00:06:08.174 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:08.463 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:08.463 + diff -u /tmp/62.8TY /tmp/spdk_tgt_config.json.xZ6 00:06:08.463 + ret=1 00:06:08.463 + echo '=== Start of file: /tmp/62.8TY ===' 00:06:08.463 + cat /tmp/62.8TY 00:06:08.463 + echo '=== End of file: /tmp/62.8TY ===' 00:06:08.463 + echo '' 00:06:08.463 + echo '=== Start of file: /tmp/spdk_tgt_config.json.xZ6 ===' 00:06:08.463 + cat /tmp/spdk_tgt_config.json.xZ6 00:06:08.463 + echo '=== End of file: /tmp/spdk_tgt_config.json.xZ6 ===' 00:06:08.463 + echo '' 00:06:08.463 + rm /tmp/62.8TY /tmp/spdk_tgt_config.json.xZ6 00:06:08.463 + exit 1 00:06:08.463 09:27:07 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:08.463 INFO: configuration change detected. 00:06:08.463 09:27:07 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:08.463 09:27:07 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:08.463 09:27:07 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:08.463 09:27:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.463 09:27:07 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:08.463 09:27:07 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:08.463 09:27:07 json_config -- json_config/json_config.sh@324 -- # [[ -n 3123264 ]] 00:06:08.463 09:27:07 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:08.463 09:27:07 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:08.463 09:27:07 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:08.463 09:27:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.463 09:27:07 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:08.463 09:27:07 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:08.463 09:27:08 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:08.463 09:27:08 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:08.463 09:27:08 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:08.463 09:27:08 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:08.463 09:27:08 json_config -- common/autotest_common.sh@733 -- # xtrace_disable 00:06:08.463 09:27:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.463 09:27:08 json_config -- json_config/json_config.sh@330 -- # killprocess 3123264 00:06:08.463 09:27:08 json_config -- common/autotest_common.sh@953 -- # '[' -z 3123264 ']' 00:06:08.463 09:27:08 json_config -- common/autotest_common.sh@957 -- # kill -0 3123264 00:06:08.463 09:27:08 json_config -- common/autotest_common.sh@958 -- # uname 00:06:08.463 09:27:08 json_config -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:06:08.463 09:27:08 
json_config -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3123264 00:06:08.463 09:27:08 json_config -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:06:08.463 09:27:08 json_config -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:06:08.463 09:27:08 json_config -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3123264' 00:06:08.463 killing process with pid 3123264 00:06:08.463 09:27:08 json_config -- common/autotest_common.sh@972 -- # kill 3123264 00:06:08.463 09:27:08 json_config -- common/autotest_common.sh@977 -- # wait 3123264 00:06:08.724 09:27:08 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.986 09:27:08 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:08.986 09:27:08 json_config -- common/autotest_common.sh@733 -- # xtrace_disable 00:06:08.986 09:27:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.986 09:27:08 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:08.986 09:27:08 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:08.986 INFO: Success 00:06:08.986 00:06:08.986 real 0m7.597s 00:06:08.986 user 0m9.070s 00:06:08.986 sys 0m2.091s 00:06:08.986 09:27:08 json_config -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:08.986 09:27:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.986 ************************************ 00:06:08.986 END TEST json_config 00:06:08.986 ************************************ 00:06:08.986 09:27:08 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:08.986 09:27:08 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:08.986 09:27:08 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:08.986 09:27:08 -- common/autotest_common.sh@10 -- # set +x 00:06:08.986 ************************************ 00:06:08.986 START TEST json_config_extra_key 00:06:08.986 ************************************ 00:06:08.986 09:27:08 json_config_extra_key -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:08.986 09:27:08 json_config_extra_key -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:06:08.986 09:27:08 json_config_extra_key -- common/autotest_common.sh@1626 -- # lcov --version 00:06:08.986 09:27:08 json_config_extra_key -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:06:09.247 09:27:08 json_config_extra_key -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.247 09:27:08 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.247 09:27:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:09.248 09:27:08 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.248 09:27:08 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.248 09:27:08 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.248 09:27:08 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:09.248 09:27:08 json_config_extra_key -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.248 09:27:08 json_config_extra_key -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:06:09.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.248 --rc genhtml_branch_coverage=1 00:06:09.248 --rc genhtml_function_coverage=1 00:06:09.248 --rc genhtml_legend=1 00:06:09.248 --rc geninfo_all_blocks=1 00:06:09.248 --rc geninfo_unexecuted_blocks=1 00:06:09.248 00:06:09.248 ' 00:06:09.248 09:27:08 json_config_extra_key -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:06:09.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.248 --rc genhtml_branch_coverage=1 00:06:09.248 --rc genhtml_function_coverage=1 00:06:09.248 --rc genhtml_legend=1 00:06:09.248 --rc geninfo_all_blocks=1 00:06:09.248 --rc geninfo_unexecuted_blocks=1 00:06:09.248 00:06:09.248 ' 00:06:09.248 09:27:08 json_config_extra_key -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:06:09.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.248 --rc genhtml_branch_coverage=1 00:06:09.248 --rc genhtml_function_coverage=1 00:06:09.248 --rc genhtml_legend=1 00:06:09.248 --rc geninfo_all_blocks=1 00:06:09.248 --rc geninfo_unexecuted_blocks=1 00:06:09.248 00:06:09.248 ' 00:06:09.248 09:27:08 json_config_extra_key -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:06:09.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.248 --rc genhtml_branch_coverage=1 00:06:09.248 --rc genhtml_function_coverage=1 00:06:09.248 --rc genhtml_legend=1 00:06:09.248 --rc geninfo_all_blocks=1 00:06:09.248 --rc geninfo_unexecuted_blocks=1 00:06:09.248 00:06:09.248 ' 00:06:09.248 09:27:08 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:09.248 09:27:08 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:09.248 09:27:08 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.248 09:27:08 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.248 09:27:08 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.248 09:27:08 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.248 09:27:08 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.248 09:27:08 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.248 09:27:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:09.248 09:27:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:09.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:09.248 09:27:08 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:09.248 09:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:09.248 09:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:09.248 09:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:09.248 09:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:09.248 09:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:09.248 09:27:08 json_config_extra_key 
-- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:09.248 09:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:09.248 09:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:09.248 09:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:09.248 09:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:09.248 09:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:09.248 INFO: launching applications... 00:06:09.248 09:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:09.248 09:27:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:09.248 09:27:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:09.248 09:27:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:09.248 09:27:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:09.248 09:27:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:09.248 09:27:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:09.248 09:27:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:09.248 09:27:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3124189 00:06:09.248 09:27:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:09.248 Waiting for target to run... 00:06:09.248 09:27:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3124189 /var/tmp/spdk_tgt.sock 00:06:09.248 09:27:08 json_config_extra_key -- common/autotest_common.sh@834 -- # '[' -z 3124189 ']' 00:06:09.248 09:27:08 json_config_extra_key -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:09.248 09:27:08 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:09.248 09:27:08 json_config_extra_key -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:09.248 09:27:08 json_config_extra_key -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:09.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:09.248 09:27:08 json_config_extra_key -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:09.248 09:27:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:09.248 [2024-10-07 09:27:08.816990] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:06:09.248 [2024-10-07 09:27:08.817103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3124189 ] 00:06:09.510 [2024-10-07 09:27:09.136691] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.771 [2024-10-07 09:27:09.189471] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.032 09:27:09 json_config_extra_key -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:10.032 09:27:09 json_config_extra_key -- common/autotest_common.sh@867 -- # return 0 00:06:10.032 09:27:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:10.032 00:06:10.032 09:27:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:10.032 INFO: shutting down applications... 00:06:10.032 09:27:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:10.032 09:27:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:10.033 09:27:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:10.033 09:27:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3124189 ]] 00:06:10.033 09:27:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3124189 00:06:10.033 09:27:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:10.033 09:27:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.033 09:27:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3124189 00:06:10.033 09:27:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.605 09:27:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.605 09:27:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.605 09:27:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3124189 00:06:10.605 09:27:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:10.605 09:27:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:10.605 09:27:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:10.605 09:27:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:10.605 SPDK target shutdown done 00:06:10.605 09:27:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:10.605 Success 00:06:10.605 00:06:10.605 real 0m1.603s 00:06:10.605 user 0m1.174s 00:06:10.605 sys 0m0.462s 00:06:10.605 09:27:10 json_config_extra_key -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:10.605 09:27:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:10.605 ************************************ 00:06:10.605 END TEST json_config_extra_key 00:06:10.605 ************************************ 00:06:10.605 09:27:10 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:10.605 09:27:10 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:10.605 09:27:10 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:10.605 09:27:10 -- common/autotest_common.sh@10 -- # set +x 00:06:10.605 ************************************ 00:06:10.605 START TEST alias_rpc 
00:06:10.605 ************************************ 00:06:10.605 09:27:10 alias_rpc -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:10.867 * Looking for test storage... 00:06:10.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:10.867 09:27:10 alias_rpc -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:06:10.867 09:27:10 alias_rpc -- common/autotest_common.sh@1626 -- # lcov --version 00:06:10.867 09:27:10 alias_rpc -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:06:10.867 09:27:10 alias_rpc -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.867 09:27:10 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:10.867 09:27:10 alias_rpc -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.867 09:27:10 alias_rpc -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:06:10.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.867 --rc genhtml_branch_coverage=1 00:06:10.867 --rc genhtml_function_coverage=1 00:06:10.867 --rc genhtml_legend=1 00:06:10.867 --rc geninfo_all_blocks=1 00:06:10.867 --rc geninfo_unexecuted_blocks=1 00:06:10.867 00:06:10.867 ' 00:06:10.867 09:27:10 alias_rpc -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:06:10.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.867 --rc genhtml_branch_coverage=1 00:06:10.867 --rc genhtml_function_coverage=1 00:06:10.867 --rc genhtml_legend=1 00:06:10.867 --rc geninfo_all_blocks=1 00:06:10.867 --rc geninfo_unexecuted_blocks=1 00:06:10.867 00:06:10.867 ' 00:06:10.867 09:27:10 alias_rpc -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:06:10.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.867 --rc genhtml_branch_coverage=1 00:06:10.867 --rc genhtml_function_coverage=1 00:06:10.867 --rc genhtml_legend=1 00:06:10.867 --rc geninfo_all_blocks=1 00:06:10.867 --rc geninfo_unexecuted_blocks=1 00:06:10.867 00:06:10.867 ' 00:06:10.867 09:27:10 alias_rpc -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:06:10.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.867 --rc genhtml_branch_coverage=1 00:06:10.867 --rc genhtml_function_coverage=1 00:06:10.867 --rc genhtml_legend=1 00:06:10.867 --rc geninfo_all_blocks=1 00:06:10.867 --rc geninfo_unexecuted_blocks=1 00:06:10.867 00:06:10.867 ' 00:06:10.867 09:27:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:10.867 09:27:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3124862 00:06:10.867 09:27:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3124862 00:06:10.867 09:27:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.867 09:27:10 alias_rpc -- common/autotest_common.sh@834 -- # '[' -z 3124862 ']' 00:06:10.867 09:27:10 alias_rpc -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.867 09:27:10 alias_rpc -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:10.867 09:27:10 alias_rpc -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:10.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.867 09:27:10 alias_rpc -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:10.867 09:27:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.867 [2024-10-07 09:27:10.489830] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:06:10.867 [2024-10-07 09:27:10.489903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3124862 ] 00:06:11.128 [2024-10-07 09:27:10.571543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.128 [2024-10-07 09:27:10.646056] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.699 09:27:11 alias_rpc -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:11.699 09:27:11 alias_rpc -- common/autotest_common.sh@867 -- # return 0 00:06:11.699 09:27:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:11.959 09:27:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3124862 00:06:11.959 09:27:11 alias_rpc -- common/autotest_common.sh@953 -- # '[' -z 3124862 ']' 00:06:11.959 09:27:11 alias_rpc -- common/autotest_common.sh@957 -- # kill -0 3124862 00:06:11.959 09:27:11 alias_rpc -- common/autotest_common.sh@958 -- # uname 00:06:11.959 09:27:11 alias_rpc -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:06:11.959 09:27:11 alias_rpc -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3124862 00:06:11.959 09:27:11 alias_rpc -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:06:11.959 09:27:11 alias_rpc -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:06:11.959 09:27:11 alias_rpc -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3124862' 00:06:11.959 killing process with pid 3124862 00:06:11.959 09:27:11 alias_rpc -- common/autotest_common.sh@972 -- # kill 3124862 00:06:11.959 09:27:11 alias_rpc -- common/autotest_common.sh@977 -- # wait 3124862 00:06:12.220 00:06:12.220 real 0m1.583s 00:06:12.220 user 0m1.698s 00:06:12.220 sys 0m0.485s 00:06:12.220 09:27:11 alias_rpc -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:12.220 09:27:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.220 ************************************ 00:06:12.220 END TEST alias_rpc 00:06:12.220 ************************************ 00:06:12.220 09:27:11 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:12.220 09:27:11 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:12.220 09:27:11 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:12.220 09:27:11 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:12.220 09:27:11 -- common/autotest_common.sh@10 -- # set +x 00:06:12.220 ************************************ 00:06:12.220 START TEST spdkcli_tcp 00:06:12.220 ************************************ 00:06:12.220 09:27:11 spdkcli_tcp -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:12.481 * Looking for test storage... 
00:06:12.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:12.481 09:27:11 spdkcli_tcp -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:06:12.481 09:27:11 spdkcli_tcp -- common/autotest_common.sh@1626 -- # lcov --version 00:06:12.481 09:27:11 spdkcli_tcp -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:06:12.481 09:27:12 spdkcli_tcp -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.481 09:27:12 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:12.481 09:27:12 spdkcli_tcp -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.481 09:27:12 spdkcli_tcp -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:06:12.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.481 --rc genhtml_branch_coverage=1 00:06:12.481 --rc genhtml_function_coverage=1 00:06:12.481 --rc genhtml_legend=1 00:06:12.481 --rc geninfo_all_blocks=1 00:06:12.481 --rc geninfo_unexecuted_blocks=1 00:06:12.481 00:06:12.481 ' 00:06:12.481 09:27:12 spdkcli_tcp -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:06:12.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.481 --rc genhtml_branch_coverage=1 00:06:12.481 --rc genhtml_function_coverage=1 00:06:12.481 --rc genhtml_legend=1 00:06:12.481 --rc geninfo_all_blocks=1 00:06:12.481 --rc 
geninfo_unexecuted_blocks=1 00:06:12.481 00:06:12.481 ' 00:06:12.482 09:27:12 spdkcli_tcp -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:06:12.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.482 --rc genhtml_branch_coverage=1 00:06:12.482 --rc genhtml_function_coverage=1 00:06:12.482 --rc genhtml_legend=1 00:06:12.482 --rc geninfo_all_blocks=1 00:06:12.482 --rc geninfo_unexecuted_blocks=1 00:06:12.482 00:06:12.482 ' 00:06:12.482 09:27:12 spdkcli_tcp -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:06:12.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.482 --rc genhtml_branch_coverage=1 00:06:12.482 --rc genhtml_function_coverage=1 00:06:12.482 --rc genhtml_legend=1 00:06:12.482 --rc geninfo_all_blocks=1 00:06:12.482 --rc geninfo_unexecuted_blocks=1 00:06:12.482 00:06:12.482 ' 00:06:12.482 09:27:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:12.482 09:27:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:12.482 09:27:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:12.482 09:27:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:12.482 09:27:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:12.482 09:27:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:12.482 09:27:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:12.482 09:27:12 spdkcli_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:12.482 09:27:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.482 09:27:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3125308 00:06:12.482 09:27:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3125308 00:06:12.482 09:27:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:12.482 09:27:12 spdkcli_tcp -- common/autotest_common.sh@834 -- # '[' -z 3125308 ']' 00:06:12.482 09:27:12 spdkcli_tcp -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.482 09:27:12 spdkcli_tcp -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:12.482 09:27:12 spdkcli_tcp -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.482 09:27:12 spdkcli_tcp -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:12.482 09:27:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.742 [2024-10-07 09:27:12.151559] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:06:12.742 [2024-10-07 09:27:12.151643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3125308 ] 00:06:12.742 [2024-10-07 09:27:12.229352] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.742 [2024-10-07 09:27:12.286482] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.742 [2024-10-07 09:27:12.286482] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.312 09:27:12 spdkcli_tcp -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:13.312 09:27:12 spdkcli_tcp -- common/autotest_common.sh@867 -- # return 0 00:06:13.312 09:27:12 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3125327 00:06:13.312 09:27:12 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:13.312 09:27:12 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:13.573 [ 00:06:13.573 "bdev_malloc_delete", 00:06:13.573 "bdev_malloc_create", 00:06:13.573 "bdev_null_resize", 00:06:13.573 "bdev_null_delete", 00:06:13.573 "bdev_null_create", 00:06:13.573 "bdev_nvme_cuse_unregister", 00:06:13.573 "bdev_nvme_cuse_register", 00:06:13.573 "bdev_opal_new_user", 00:06:13.573 "bdev_opal_set_lock_state", 00:06:13.573 "bdev_opal_delete", 00:06:13.573 "bdev_opal_get_info", 00:06:13.573 "bdev_opal_create", 00:06:13.573 "bdev_nvme_opal_revert", 00:06:13.573 "bdev_nvme_opal_init", 00:06:13.573 "bdev_nvme_send_cmd", 00:06:13.573 "bdev_nvme_set_keys", 00:06:13.573 "bdev_nvme_get_path_iostat", 00:06:13.573 "bdev_nvme_get_mdns_discovery_info", 00:06:13.573 "bdev_nvme_stop_mdns_discovery", 00:06:13.573 "bdev_nvme_start_mdns_discovery", 00:06:13.573 "bdev_nvme_set_multipath_policy", 00:06:13.573 "bdev_nvme_set_preferred_path", 00:06:13.573 "bdev_nvme_get_io_paths", 00:06:13.573 "bdev_nvme_remove_error_injection", 00:06:13.573 "bdev_nvme_add_error_injection", 00:06:13.573 "bdev_nvme_get_discovery_info", 00:06:13.573 "bdev_nvme_stop_discovery", 00:06:13.573 "bdev_nvme_start_discovery", 00:06:13.573 "bdev_nvme_get_controller_health_info", 00:06:13.573 "bdev_nvme_disable_controller", 00:06:13.573 "bdev_nvme_enable_controller", 00:06:13.573 "bdev_nvme_reset_controller", 00:06:13.573 "bdev_nvme_get_transport_statistics", 00:06:13.573 "bdev_nvme_apply_firmware", 00:06:13.573 "bdev_nvme_detach_controller", 00:06:13.573 "bdev_nvme_get_controllers", 00:06:13.573 "bdev_nvme_attach_controller", 00:06:13.573 "bdev_nvme_set_hotplug", 00:06:13.573 "bdev_nvme_set_options", 00:06:13.574 "bdev_passthru_delete", 00:06:13.574 "bdev_passthru_create", 00:06:13.574 "bdev_lvol_set_parent_bdev", 00:06:13.574 "bdev_lvol_set_parent", 00:06:13.574 "bdev_lvol_check_shallow_copy", 00:06:13.574 "bdev_lvol_start_shallow_copy", 00:06:13.574 "bdev_lvol_grow_lvstore", 00:06:13.574 "bdev_lvol_get_lvols", 00:06:13.574 "bdev_lvol_get_lvstores", 00:06:13.574 "bdev_lvol_delete", 00:06:13.574 "bdev_lvol_set_read_only", 00:06:13.574 "bdev_lvol_resize", 00:06:13.574 "bdev_lvol_decouple_parent", 00:06:13.574 "bdev_lvol_inflate", 00:06:13.574 "bdev_lvol_rename", 00:06:13.574 "bdev_lvol_clone_bdev", 00:06:13.574 "bdev_lvol_clone", 00:06:13.574 "bdev_lvol_snapshot", 00:06:13.574 "bdev_lvol_create", 00:06:13.574 "bdev_lvol_delete_lvstore", 00:06:13.574 "bdev_lvol_rename_lvstore", 
00:06:13.574 "bdev_lvol_create_lvstore", 00:06:13.574 "bdev_raid_set_options", 00:06:13.574 "bdev_raid_remove_base_bdev", 00:06:13.574 "bdev_raid_add_base_bdev", 00:06:13.574 "bdev_raid_delete", 00:06:13.574 "bdev_raid_create", 00:06:13.574 "bdev_raid_get_bdevs", 00:06:13.574 "bdev_error_inject_error", 00:06:13.574 "bdev_error_delete", 00:06:13.574 "bdev_error_create", 00:06:13.574 "bdev_split_delete", 00:06:13.574 "bdev_split_create", 00:06:13.574 "bdev_delay_delete", 00:06:13.574 "bdev_delay_create", 00:06:13.574 "bdev_delay_update_latency", 00:06:13.574 "bdev_zone_block_delete", 00:06:13.574 "bdev_zone_block_create", 00:06:13.574 "blobfs_create", 00:06:13.574 "blobfs_detect", 00:06:13.574 "blobfs_set_cache_size", 00:06:13.574 "bdev_aio_delete", 00:06:13.574 "bdev_aio_rescan", 00:06:13.574 "bdev_aio_create", 00:06:13.574 "bdev_ftl_set_property", 00:06:13.574 "bdev_ftl_get_properties", 00:06:13.574 "bdev_ftl_get_stats", 00:06:13.574 "bdev_ftl_unmap", 00:06:13.574 "bdev_ftl_unload", 00:06:13.574 "bdev_ftl_delete", 00:06:13.574 "bdev_ftl_load", 00:06:13.574 "bdev_ftl_create", 00:06:13.574 "bdev_virtio_attach_controller", 00:06:13.574 "bdev_virtio_scsi_get_devices", 00:06:13.574 "bdev_virtio_detach_controller", 00:06:13.574 "bdev_virtio_blk_set_hotplug", 00:06:13.574 "bdev_iscsi_delete", 00:06:13.574 "bdev_iscsi_create", 00:06:13.574 "bdev_iscsi_set_options", 00:06:13.574 "accel_error_inject_error", 00:06:13.574 "ioat_scan_accel_module", 00:06:13.574 "dsa_scan_accel_module", 00:06:13.574 "iaa_scan_accel_module", 00:06:13.574 "vfu_virtio_create_fs_endpoint", 00:06:13.574 "vfu_virtio_create_scsi_endpoint", 00:06:13.574 "vfu_virtio_scsi_remove_target", 00:06:13.574 "vfu_virtio_scsi_add_target", 00:06:13.574 "vfu_virtio_create_blk_endpoint", 00:06:13.574 "vfu_virtio_delete_endpoint", 00:06:13.574 "keyring_file_remove_key", 00:06:13.574 "keyring_file_add_key", 00:06:13.574 "keyring_linux_set_options", 00:06:13.574 "fsdev_aio_delete", 00:06:13.574 "fsdev_aio_create", 00:06:13.574 "iscsi_get_histogram", 00:06:13.574 "iscsi_enable_histogram", 00:06:13.574 "iscsi_set_options", 00:06:13.574 "iscsi_get_auth_groups", 00:06:13.574 "iscsi_auth_group_remove_secret", 00:06:13.574 "iscsi_auth_group_add_secret", 00:06:13.574 "iscsi_delete_auth_group", 00:06:13.574 "iscsi_create_auth_group", 00:06:13.574 "iscsi_set_discovery_auth", 00:06:13.574 "iscsi_get_options", 00:06:13.574 "iscsi_target_node_request_logout", 00:06:13.574 "iscsi_target_node_set_redirect", 00:06:13.574 "iscsi_target_node_set_auth", 00:06:13.574 "iscsi_target_node_add_lun", 00:06:13.574 "iscsi_get_stats", 00:06:13.574 "iscsi_get_connections", 00:06:13.574 "iscsi_portal_group_set_auth", 00:06:13.574 "iscsi_start_portal_group", 00:06:13.574 "iscsi_delete_portal_group", 00:06:13.574 "iscsi_create_portal_group", 00:06:13.574 "iscsi_get_portal_groups", 00:06:13.574 "iscsi_delete_target_node", 00:06:13.574 "iscsi_target_node_remove_pg_ig_maps", 00:06:13.574 "iscsi_target_node_add_pg_ig_maps", 00:06:13.574 "iscsi_create_target_node", 00:06:13.574 "iscsi_get_target_nodes", 00:06:13.574 "iscsi_delete_initiator_group", 00:06:13.574 "iscsi_initiator_group_remove_initiators", 00:06:13.574 "iscsi_initiator_group_add_initiators", 00:06:13.574 "iscsi_create_initiator_group", 00:06:13.574 "iscsi_get_initiator_groups", 00:06:13.574 "nvmf_set_crdt", 00:06:13.574 "nvmf_set_config", 00:06:13.574 "nvmf_set_max_subsystems", 00:06:13.574 "nvmf_stop_mdns_prr", 00:06:13.574 "nvmf_publish_mdns_prr", 00:06:13.574 "nvmf_subsystem_get_listeners", 00:06:13.574 
"nvmf_subsystem_get_qpairs", 00:06:13.574 "nvmf_subsystem_get_controllers", 00:06:13.574 "nvmf_get_stats", 00:06:13.574 "nvmf_get_transports", 00:06:13.574 "nvmf_create_transport", 00:06:13.574 "nvmf_get_targets", 00:06:13.574 "nvmf_delete_target", 00:06:13.574 "nvmf_create_target", 00:06:13.574 "nvmf_subsystem_allow_any_host", 00:06:13.574 "nvmf_subsystem_set_keys", 00:06:13.574 "nvmf_subsystem_remove_host", 00:06:13.574 "nvmf_subsystem_add_host", 00:06:13.574 "nvmf_ns_remove_host", 00:06:13.574 "nvmf_ns_add_host", 00:06:13.574 "nvmf_subsystem_remove_ns", 00:06:13.574 "nvmf_subsystem_set_ns_ana_group", 00:06:13.574 "nvmf_subsystem_add_ns", 00:06:13.574 "nvmf_subsystem_listener_set_ana_state", 00:06:13.574 "nvmf_discovery_get_referrals", 00:06:13.574 "nvmf_discovery_remove_referral", 00:06:13.574 "nvmf_discovery_add_referral", 00:06:13.574 "nvmf_subsystem_remove_listener", 00:06:13.574 "nvmf_subsystem_add_listener", 00:06:13.574 "nvmf_delete_subsystem", 00:06:13.574 "nvmf_create_subsystem", 00:06:13.574 "nvmf_get_subsystems", 00:06:13.574 "env_dpdk_get_mem_stats", 00:06:13.574 "nbd_get_disks", 00:06:13.574 "nbd_stop_disk", 00:06:13.574 "nbd_start_disk", 00:06:13.574 "ublk_recover_disk", 00:06:13.574 "ublk_get_disks", 00:06:13.574 "ublk_stop_disk", 00:06:13.574 "ublk_start_disk", 00:06:13.574 "ublk_destroy_target", 00:06:13.574 "ublk_create_target", 00:06:13.574 "virtio_blk_create_transport", 00:06:13.574 "virtio_blk_get_transports", 00:06:13.574 "vhost_controller_set_coalescing", 00:06:13.574 "vhost_get_controllers", 00:06:13.574 "vhost_delete_controller", 00:06:13.574 "vhost_create_blk_controller", 00:06:13.574 "vhost_scsi_controller_remove_target", 00:06:13.574 "vhost_scsi_controller_add_target", 00:06:13.574 "vhost_start_scsi_controller", 00:06:13.574 "vhost_create_scsi_controller", 00:06:13.574 "thread_set_cpumask", 00:06:13.574 "scheduler_set_options", 00:06:13.574 "framework_get_governor", 00:06:13.574 "framework_get_scheduler", 00:06:13.574 "framework_set_scheduler", 00:06:13.574 "framework_get_reactors", 00:06:13.574 "thread_get_io_channels", 00:06:13.574 "thread_get_pollers", 00:06:13.574 "thread_get_stats", 00:06:13.574 "framework_monitor_context_switch", 00:06:13.574 "spdk_kill_instance", 00:06:13.574 "log_enable_timestamps", 00:06:13.574 "log_get_flags", 00:06:13.574 "log_clear_flag", 00:06:13.574 "log_set_flag", 00:06:13.574 "log_get_level", 00:06:13.574 "log_set_level", 00:06:13.574 "log_get_print_level", 00:06:13.574 "log_set_print_level", 00:06:13.574 "framework_enable_cpumask_locks", 00:06:13.574 "framework_disable_cpumask_locks", 00:06:13.574 "framework_wait_init", 00:06:13.574 "framework_start_init", 00:06:13.574 "scsi_get_devices", 00:06:13.574 "bdev_get_histogram", 00:06:13.574 "bdev_enable_histogram", 00:06:13.574 "bdev_set_qos_limit", 00:06:13.574 "bdev_set_qd_sampling_period", 00:06:13.574 "bdev_get_bdevs", 00:06:13.574 "bdev_reset_iostat", 00:06:13.574 "bdev_get_iostat", 00:06:13.574 "bdev_examine", 00:06:13.574 "bdev_wait_for_examine", 00:06:13.574 "bdev_set_options", 00:06:13.574 "accel_get_stats", 00:06:13.574 "accel_set_options", 00:06:13.574 "accel_set_driver", 00:06:13.574 "accel_crypto_key_destroy", 00:06:13.574 "accel_crypto_keys_get", 00:06:13.574 "accel_crypto_key_create", 00:06:13.574 "accel_assign_opc", 00:06:13.574 "accel_get_module_info", 00:06:13.574 "accel_get_opc_assignments", 00:06:13.574 "vmd_rescan", 00:06:13.574 "vmd_remove_device", 00:06:13.574 "vmd_enable", 00:06:13.574 "sock_get_default_impl", 00:06:13.574 "sock_set_default_impl", 
00:06:13.574 "sock_impl_set_options", 00:06:13.574 "sock_impl_get_options", 00:06:13.574 "iobuf_get_stats", 00:06:13.574 "iobuf_set_options", 00:06:13.574 "keyring_get_keys", 00:06:13.574 "vfu_tgt_set_base_path", 00:06:13.574 "framework_get_pci_devices", 00:06:13.574 "framework_get_config", 00:06:13.574 "framework_get_subsystems", 00:06:13.574 "fsdev_set_opts", 00:06:13.574 "fsdev_get_opts", 00:06:13.574 "trace_get_info", 00:06:13.574 "trace_get_tpoint_group_mask", 00:06:13.574 "trace_disable_tpoint_group", 00:06:13.574 "trace_enable_tpoint_group", 00:06:13.574 "trace_clear_tpoint_mask", 00:06:13.574 "trace_set_tpoint_mask", 00:06:13.574 "notify_get_notifications", 00:06:13.574 "notify_get_types", 00:06:13.574 "spdk_get_version", 00:06:13.574 "rpc_get_methods" 00:06:13.574 ] 00:06:13.574 09:27:13 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:13.574 09:27:13 spdkcli_tcp -- common/autotest_common.sh@733 -- # xtrace_disable 00:06:13.574 09:27:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.574 09:27:13 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:13.574 09:27:13 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3125308 00:06:13.574 09:27:13 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' -z 3125308 ']' 00:06:13.574 09:27:13 spdkcli_tcp -- common/autotest_common.sh@957 -- # kill -0 3125308 00:06:13.574 09:27:13 spdkcli_tcp -- common/autotest_common.sh@958 -- # uname 00:06:13.574 09:27:13 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:06:13.574 09:27:13 spdkcli_tcp -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3125308 00:06:13.574 09:27:13 spdkcli_tcp -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:06:13.575 09:27:13 spdkcli_tcp -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:06:13.575 09:27:13 spdkcli_tcp -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3125308' 00:06:13.575 killing process with pid 3125308 00:06:13.575 09:27:13 spdkcli_tcp -- common/autotest_common.sh@972 -- # kill 3125308 00:06:13.575 09:27:13 spdkcli_tcp -- common/autotest_common.sh@977 -- # wait 3125308 00:06:13.836 00:06:13.836 real 0m1.583s 00:06:13.836 user 0m2.799s 00:06:13.836 sys 0m0.502s 00:06:13.836 09:27:13 spdkcli_tcp -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:13.836 09:27:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.836 ************************************ 00:06:13.836 END TEST spdkcli_tcp 00:06:13.836 ************************************ 00:06:13.836 09:27:13 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:13.836 09:27:13 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:13.836 09:27:13 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:13.836 09:27:13 -- common/autotest_common.sh@10 -- # set +x 00:06:14.097 ************************************ 00:06:14.097 START TEST dpdk_mem_utility 00:06:14.097 ************************************ 00:06:14.097 09:27:13 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:14.097 * Looking for test storage... 
00:06:14.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:14.097 09:27:13 dpdk_mem_utility -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:06:14.097 09:27:13 dpdk_mem_utility -- common/autotest_common.sh@1626 -- # lcov --version 00:06:14.097 09:27:13 dpdk_mem_utility -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:06:14.097 09:27:13 dpdk_mem_utility -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.097 09:27:13 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:14.098 09:27:13 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:14.098 09:27:13 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.098 09:27:13 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:14.098 09:27:13 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.098 09:27:13 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.098 09:27:13 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.098 09:27:13 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:14.098 09:27:13 dpdk_mem_utility -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.098 09:27:13 dpdk_mem_utility -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:06:14.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.098 --rc genhtml_branch_coverage=1 00:06:14.098 --rc genhtml_function_coverage=1 00:06:14.098 --rc genhtml_legend=1 00:06:14.098 --rc geninfo_all_blocks=1 00:06:14.098 --rc geninfo_unexecuted_blocks=1 00:06:14.098 00:06:14.098 ' 00:06:14.098 09:27:13 dpdk_mem_utility -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:06:14.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.098 --rc 
genhtml_branch_coverage=1 00:06:14.098 --rc genhtml_function_coverage=1 00:06:14.098 --rc genhtml_legend=1 00:06:14.098 --rc geninfo_all_blocks=1 00:06:14.098 --rc geninfo_unexecuted_blocks=1 00:06:14.098 00:06:14.098 ' 00:06:14.098 09:27:13 dpdk_mem_utility -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:06:14.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.098 --rc genhtml_branch_coverage=1 00:06:14.098 --rc genhtml_function_coverage=1 00:06:14.098 --rc genhtml_legend=1 00:06:14.098 --rc geninfo_all_blocks=1 00:06:14.098 --rc geninfo_unexecuted_blocks=1 00:06:14.098 00:06:14.098 ' 00:06:14.098 09:27:13 dpdk_mem_utility -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:06:14.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.098 --rc genhtml_branch_coverage=1 00:06:14.098 --rc genhtml_function_coverage=1 00:06:14.098 --rc genhtml_legend=1 00:06:14.098 --rc geninfo_all_blocks=1 00:06:14.098 --rc geninfo_unexecuted_blocks=1 00:06:14.098 00:06:14.098 ' 00:06:14.098 09:27:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:14.098 09:27:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3125730 00:06:14.098 09:27:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3125730 00:06:14.098 09:27:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.098 09:27:13 dpdk_mem_utility -- common/autotest_common.sh@834 -- # '[' -z 3125730 ']' 00:06:14.098 09:27:13 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.098 09:27:13 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:14.098 09:27:13 dpdk_mem_utility -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.098 09:27:13 dpdk_mem_utility -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:14.098 09:27:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:14.358 [2024-10-07 09:27:13.802665] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:06:14.358 [2024-10-07 09:27:13.802732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3125730 ] 00:06:14.358 [2024-10-07 09:27:13.855796] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.358 [2024-10-07 09:27:13.911540] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.619 09:27:14 dpdk_mem_utility -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:14.619 09:27:14 dpdk_mem_utility -- common/autotest_common.sh@867 -- # return 0 00:06:14.619 09:27:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:14.619 09:27:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:14.619 09:27:14 dpdk_mem_utility -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:14.619 09:27:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:14.619 { 00:06:14.619 "filename": "/tmp/spdk_mem_dump.txt" 00:06:14.619 } 00:06:14.619 09:27:14 dpdk_mem_utility -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:14.620 09:27:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:14.620 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:14.620 1 heaps totaling size 860.000000 MiB 00:06:14.620 size: 860.000000 MiB heap id: 0 00:06:14.620 end heaps---------- 00:06:14.620 9 mempools totaling size 642.649841 MiB 00:06:14.620 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:14.620 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:14.620 size: 92.545471 MiB name: bdev_io_3125730 00:06:14.620 size: 51.011292 MiB name: evtpool_3125730 00:06:14.620 size: 50.003479 MiB name: msgpool_3125730 00:06:14.620 size: 36.509338 MiB name: fsdev_io_3125730 00:06:14.620 size: 21.763794 MiB name: PDU_Pool 00:06:14.620 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:14.620 size: 0.026123 MiB name: Session_Pool 00:06:14.620 end mempools------- 00:06:14.620 6 memzones totaling size 4.142822 MiB 00:06:14.620 size: 1.000366 MiB name: RG_ring_0_3125730 00:06:14.620 size: 1.000366 MiB name: RG_ring_1_3125730 00:06:14.620 size: 1.000366 MiB name: RG_ring_4_3125730 00:06:14.620 size: 1.000366 MiB name: RG_ring_5_3125730 00:06:14.620 size: 0.125366 MiB name: RG_ring_2_3125730 00:06:14.620 size: 0.015991 MiB name: RG_ring_3_3125730 00:06:14.620 end memzones------- 00:06:14.620 09:27:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:14.620 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:06:14.620 list of free elements. 
size: 13.984680 MiB 00:06:14.620 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:14.620 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:14.620 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:14.620 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:14.620 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:14.620 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:14.620 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:14.620 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:14.620 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:14.620 element at address: 0x20001d800000 with size: 0.582886 MiB 00:06:14.620 element at address: 0x200003e00000 with size: 0.495422 MiB 00:06:14.620 element at address: 0x20000d800000 with size: 0.490723 MiB 00:06:14.620 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:14.620 element at address: 0x200007000000 with size: 0.481934 MiB 00:06:14.620 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:06:14.620 element at address: 0x200003a00000 with size: 0.355042 MiB 00:06:14.620 list of standard malloc elements. size: 199.218628 MiB 00:06:14.620 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:14.620 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:14.620 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:14.620 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:14.620 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:14.620 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:14.620 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:14.620 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:14.620 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:14.620 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:14.620 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:14.620 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:14.620 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:14.620 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:14.620 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:14.620 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:14.620 element at address: 0x200003a5ae40 with size: 0.000183 MiB 00:06:14.620 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:14.620 element at address: 0x200003a5f300 with size: 0.000183 MiB 00:06:14.620 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:14.620 element at address: 0x200003a7f680 with size: 0.000183 MiB 00:06:14.620 element at address: 0x200003aff940 with size: 0.000183 MiB 00:06:14.620 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:14.620 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:14.620 element at address: 0x200003eff000 with size: 0.000183 MiB 00:06:14.620 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:14.620 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:14.620 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:14.620 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:14.620 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:14.620 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:14.620 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:06:14.620 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:14.620 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:14.620 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:14.620 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:14.620 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:14.620 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:14.620 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:14.620 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:06:14.620 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:06:14.620 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:06:14.620 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:14.620 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:14.620 list of memzone associated elements. size: 646.796692 MiB 00:06:14.620 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:14.620 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:14.620 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:14.620 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:14.620 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:14.620 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3125730_0 00:06:14.620 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:14.620 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3125730_0 00:06:14.620 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:14.620 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3125730_0 00:06:14.620 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:14.620 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3125730_0 00:06:14.620 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:14.620 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:14.620 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:14.620 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:14.620 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:14.620 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3125730 00:06:14.620 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:14.620 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3125730 00:06:14.620 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:14.620 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3125730 00:06:14.620 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:14.620 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:14.620 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:14.620 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:14.620 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:14.620 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:14.620 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:14.620 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:14.620 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:14.620 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3125730 00:06:14.620 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:14.620 associated memzone info: 
size: 1.000366 MiB name: RG_ring_1_3125730 00:06:14.620 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:14.620 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3125730 00:06:14.620 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:14.620 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3125730 00:06:14.620 element at address: 0x200003a7f740 with size: 0.500488 MiB 00:06:14.620 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3125730 00:06:14.620 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:06:14.620 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3125730 00:06:14.620 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:14.620 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:14.620 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:14.620 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:14.620 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:14.620 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:14.620 element at address: 0x200003a5f3c0 with size: 0.125488 MiB 00:06:14.620 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3125730 00:06:14.620 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:14.620 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:14.620 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:06:14.620 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:14.620 element at address: 0x200003a5b100 with size: 0.016113 MiB 00:06:14.620 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3125730 00:06:14.620 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:06:14.620 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:14.620 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:14.620 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3125730 00:06:14.620 element at address: 0x200003affa00 with size: 0.000305 MiB 00:06:14.620 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3125730 00:06:14.620 element at address: 0x200003a5af00 with size: 0.000305 MiB 00:06:14.621 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3125730 00:06:14.621 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:06:14.621 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:14.621 09:27:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:14.621 09:27:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3125730 00:06:14.621 09:27:14 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' -z 3125730 ']' 00:06:14.621 09:27:14 dpdk_mem_utility -- common/autotest_common.sh@957 -- # kill -0 3125730 00:06:14.621 09:27:14 dpdk_mem_utility -- common/autotest_common.sh@958 -- # uname 00:06:14.621 09:27:14 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:06:14.621 09:27:14 dpdk_mem_utility -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3125730 00:06:14.621 09:27:14 dpdk_mem_utility -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:06:14.621 09:27:14 dpdk_mem_utility -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:06:14.621 09:27:14 dpdk_mem_utility -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3125730' 
00:06:14.621 killing process with pid 3125730 00:06:14.621 09:27:14 dpdk_mem_utility -- common/autotest_common.sh@972 -- # kill 3125730 00:06:14.621 09:27:14 dpdk_mem_utility -- common/autotest_common.sh@977 -- # wait 3125730 00:06:14.881 00:06:14.881 real 0m0.957s 00:06:14.881 user 0m0.965s 00:06:14.881 sys 0m0.389s 00:06:14.881 09:27:14 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:14.881 09:27:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:14.881 ************************************ 00:06:14.881 END TEST dpdk_mem_utility 00:06:14.881 ************************************ 00:06:14.881 09:27:14 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:14.882 09:27:14 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:14.882 09:27:14 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:14.882 09:27:14 -- common/autotest_common.sh@10 -- # set +x 00:06:15.141 ************************************ 00:06:15.141 START TEST event 00:06:15.141 ************************************ 00:06:15.141 09:27:14 event -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:15.141 * Looking for test storage... 00:06:15.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:15.141 09:27:14 event -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:06:15.141 09:27:14 event -- common/autotest_common.sh@1626 -- # lcov --version 00:06:15.141 09:27:14 event -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:06:15.141 09:27:14 event -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:06:15.141 09:27:14 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.141 09:27:14 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.141 09:27:14 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.141 09:27:14 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.141 09:27:14 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.141 09:27:14 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.141 09:27:14 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.141 09:27:14 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.141 09:27:14 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.141 09:27:14 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.141 09:27:14 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.141 09:27:14 event -- scripts/common.sh@344 -- # case "$op" in 00:06:15.141 09:27:14 event -- scripts/common.sh@345 -- # : 1 00:06:15.141 09:27:14 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.141 09:27:14 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.141 09:27:14 event -- scripts/common.sh@365 -- # decimal 1 00:06:15.141 09:27:14 event -- scripts/common.sh@353 -- # local d=1 00:06:15.141 09:27:14 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.141 09:27:14 event -- scripts/common.sh@355 -- # echo 1 00:06:15.141 09:27:14 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.141 09:27:14 event -- scripts/common.sh@366 -- # decimal 2 00:06:15.141 09:27:14 event -- scripts/common.sh@353 -- # local d=2 00:06:15.141 09:27:14 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.141 09:27:14 event -- scripts/common.sh@355 -- # echo 2 00:06:15.141 09:27:14 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.141 09:27:14 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.141 09:27:14 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.141 09:27:14 event -- scripts/common.sh@368 -- # return 0 00:06:15.141 09:27:14 event -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.141 09:27:14 event -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:06:15.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.141 --rc genhtml_branch_coverage=1 00:06:15.141 --rc genhtml_function_coverage=1 00:06:15.141 --rc genhtml_legend=1 00:06:15.141 --rc geninfo_all_blocks=1 00:06:15.141 --rc geninfo_unexecuted_blocks=1 00:06:15.141 00:06:15.141 ' 00:06:15.141 09:27:14 event -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:06:15.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.142 --rc genhtml_branch_coverage=1 00:06:15.142 --rc genhtml_function_coverage=1 00:06:15.142 --rc genhtml_legend=1 00:06:15.142 --rc geninfo_all_blocks=1 00:06:15.142 --rc geninfo_unexecuted_blocks=1 00:06:15.142 00:06:15.142 ' 00:06:15.142 09:27:14 event -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:06:15.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.142 --rc genhtml_branch_coverage=1 00:06:15.142 --rc genhtml_function_coverage=1 00:06:15.142 --rc genhtml_legend=1 00:06:15.142 --rc geninfo_all_blocks=1 00:06:15.142 --rc geninfo_unexecuted_blocks=1 00:06:15.142 00:06:15.142 ' 00:06:15.142 09:27:14 event -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:06:15.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.142 --rc genhtml_branch_coverage=1 00:06:15.142 --rc genhtml_function_coverage=1 00:06:15.142 --rc genhtml_legend=1 00:06:15.142 --rc geninfo_all_blocks=1 00:06:15.142 --rc geninfo_unexecuted_blocks=1 00:06:15.142 00:06:15.142 ' 00:06:15.142 09:27:14 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:15.142 09:27:14 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:15.142 09:27:14 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:15.142 09:27:14 event -- common/autotest_common.sh@1104 -- # '[' 6 -le 1 ']' 00:06:15.142 09:27:14 event -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:15.142 09:27:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.400 ************************************ 00:06:15.400 START TEST event_perf 00:06:15.400 ************************************ 00:06:15.400 09:27:14 event.event_perf -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:06:15.400 Running I/O for 1 seconds...[2024-10-07 09:27:14.841841] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:06:15.400 [2024-10-07 09:27:14.841942] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3125855 ] 00:06:15.400 [2024-10-07 09:27:14.924771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.400 [2024-10-07 09:27:14.998858] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.401 [2024-10-07 09:27:14.999017] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.401 [2024-10-07 09:27:14.999170] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.401 Running I/O for 1 seconds...[2024-10-07 09:27:14.999171] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.777 00:06:16.778 lcore 0: 185298 00:06:16.778 lcore 1: 185300 00:06:16.778 lcore 2: 185301 00:06:16.778 lcore 3: 185301 00:06:16.778 done. 00:06:16.778 00:06:16.778 real 0m1.225s 00:06:16.778 user 0m4.127s 00:06:16.778 sys 0m0.096s 00:06:16.778 09:27:16 event.event_perf -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:16.778 09:27:16 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.778 ************************************ 00:06:16.778 END TEST event_perf 00:06:16.778 ************************************ 00:06:16.778 09:27:16 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:16.778 09:27:16 event -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:06:16.778 09:27:16 event -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:16.778 09:27:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.778 ************************************ 00:06:16.778 START TEST event_reactor 00:06:16.778 ************************************ 00:06:16.778 09:27:16 event.event_reactor -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:16.778 [2024-10-07 09:27:16.143997] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:06:16.778 [2024-10-07 09:27:16.144104] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3126164 ] 00:06:16.778 [2024-10-07 09:27:16.225029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.778 [2024-10-07 09:27:16.283372] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.716 test_start 00:06:17.716 oneshot 00:06:17.716 tick 100 00:06:17.716 tick 100 00:06:17.716 tick 250 00:06:17.716 tick 100 00:06:17.716 tick 100 00:06:17.716 tick 100 00:06:17.716 tick 250 00:06:17.716 tick 500 00:06:17.716 tick 100 00:06:17.716 tick 100 00:06:17.716 tick 250 00:06:17.716 tick 100 00:06:17.716 tick 100 00:06:17.716 test_end 00:06:17.716 00:06:17.716 real 0m1.205s 00:06:17.716 user 0m1.126s 00:06:17.716 sys 0m0.076s 00:06:17.716 09:27:17 event.event_reactor -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:17.717 09:27:17 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:17.717 ************************************ 00:06:17.717 END TEST event_reactor 00:06:17.717 ************************************ 00:06:17.717 09:27:17 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:17.717 09:27:17 event -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:06:17.717 09:27:17 event -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:17.717 09:27:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.977 ************************************ 00:06:17.977 START TEST event_reactor_perf 00:06:17.977 ************************************ 00:06:17.977 09:27:17 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:17.977 [2024-10-07 09:27:17.427855] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:06:17.977 [2024-10-07 09:27:17.427961] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3126519 ] 00:06:17.977 [2024-10-07 09:27:17.510994] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.977 [2024-10-07 09:27:17.568428] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.358 test_start 00:06:19.358 test_end 00:06:19.358 Performance: 537517 events per second 00:06:19.358 00:06:19.358 real 0m1.206s 00:06:19.358 user 0m1.113s 00:06:19.358 sys 0m0.089s 00:06:19.358 09:27:18 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:19.359 09:27:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:19.359 ************************************ 00:06:19.359 END TEST event_reactor_perf 00:06:19.359 ************************************ 00:06:19.359 09:27:18 event -- event/event.sh@49 -- # uname -s 00:06:19.359 09:27:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:19.359 09:27:18 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:19.359 09:27:18 event -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:19.359 09:27:18 event -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:19.359 09:27:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.359 ************************************ 00:06:19.359 START TEST event_scheduler 00:06:19.359 ************************************ 00:06:19.359 09:27:18 event.event_scheduler -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:19.359 * Looking for test storage... 
00:06:19.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:19.359 09:27:18 event.event_scheduler -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:06:19.359 09:27:18 event.event_scheduler -- common/autotest_common.sh@1626 -- # lcov --version 00:06:19.359 09:27:18 event.event_scheduler -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:06:19.359 09:27:18 event.event_scheduler -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.359 09:27:18 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:19.359 09:27:18 event.event_scheduler -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.359 09:27:18 event.event_scheduler -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:06:19.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.359 --rc genhtml_branch_coverage=1 00:06:19.359 --rc genhtml_function_coverage=1 00:06:19.359 --rc genhtml_legend=1 00:06:19.359 --rc geninfo_all_blocks=1 00:06:19.359 --rc geninfo_unexecuted_blocks=1 00:06:19.359 00:06:19.359 ' 00:06:19.359 09:27:18 event.event_scheduler -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:06:19.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.359 --rc genhtml_branch_coverage=1 00:06:19.359 --rc genhtml_function_coverage=1 00:06:19.359 --rc genhtml_legend=1 00:06:19.359 --rc geninfo_all_blocks=1 00:06:19.359 --rc geninfo_unexecuted_blocks=1 00:06:19.359 00:06:19.359 ' 00:06:19.359 09:27:18 event.event_scheduler -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:06:19.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.359 --rc genhtml_branch_coverage=1 00:06:19.359 --rc genhtml_function_coverage=1 00:06:19.359 --rc genhtml_legend=1 00:06:19.359 --rc geninfo_all_blocks=1 00:06:19.359 --rc geninfo_unexecuted_blocks=1 00:06:19.359 00:06:19.359 ' 00:06:19.359 09:27:18 event.event_scheduler -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:06:19.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.359 --rc genhtml_branch_coverage=1 00:06:19.359 --rc genhtml_function_coverage=1 00:06:19.359 --rc genhtml_legend=1 00:06:19.359 --rc geninfo_all_blocks=1 00:06:19.359 --rc geninfo_unexecuted_blocks=1 00:06:19.359 00:06:19.359 ' 00:06:19.359 09:27:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:19.359 09:27:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3126916 00:06:19.359 09:27:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.359 09:27:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3126916 00:06:19.359 09:27:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:06:19.359 09:27:18 event.event_scheduler -- common/autotest_common.sh@834 -- # '[' -z 3126916 ']' 00:06:19.359 09:27:18 event.event_scheduler -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.359 09:27:18 event.event_scheduler -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:19.359 09:27:18 event.event_scheduler -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.359 09:27:18 event.event_scheduler -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:19.359 09:27:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.359 [2024-10-07 09:27:18.980669] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:06:19.359 [2024-10-07 09:27:18.980723] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3126916 ] 00:06:19.619 [2024-10-07 09:27:19.061212] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.619 [2024-10-07 09:27:19.140495] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.619 [2024-10-07 09:27:19.140676] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.620 [2024-10-07 09:27:19.140769] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.620 [2024-10-07 09:27:19.140771] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.191 09:27:19 event.event_scheduler -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:20.191 09:27:19 event.event_scheduler -- common/autotest_common.sh@867 -- # return 0 00:06:20.191 09:27:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:20.191 09:27:19 event.event_scheduler -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:20.191 09:27:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:20.191 [2024-10-07 09:27:19.787343] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:20.191 [2024-10-07 09:27:19.787362] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:20.191 [2024-10-07 09:27:19.787372] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:20.191 [2024-10-07 09:27:19.787378] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:20.191 [2024-10-07 09:27:19.787384] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:20.191 09:27:19 event.event_scheduler -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:20.191 09:27:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:20.191 09:27:19 event.event_scheduler -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:20.191 09:27:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:20.451 [2024-10-07 09:27:19.854444] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
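Note: the dynamic scheduler comes up with load limit 20, core limit 80 and core busy 95, per the set_opts notices above, and the dpdk governor error is non-fatal — the scheduler simply runs without the governor. A minimal sketch of driving the same startup by hand over RPC, assuming the app was launched with --wait-for-rpc; the --load-limit/--core-limit/--core-busy flags are an assumption about this SPDK revision's rpc.py (the test itself appears to rely on the built-in defaults shown in the notices):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# select the dynamic scheduler; the limit values mirror the set_opts notices above
$rpc framework_set_scheduler dynamic --load-limit 20 --core-limit 80 --core-busy 95
# finish subsystem init so the scheduler actually starts running
$rpc framework_start_init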
00:06:20.451 09:27:19 event.event_scheduler -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:20.451 09:27:19 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:20.451 09:27:19 event.event_scheduler -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:20.451 09:27:19 event.event_scheduler -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:20.451 09:27:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:20.451 ************************************ 00:06:20.451 START TEST scheduler_create_thread 00:06:20.451 ************************************ 00:06:20.451 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # scheduler_create_thread 00:06:20.451 09:27:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:20.451 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:20.451 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.451 2 00:06:20.451 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.452 3 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.452 4 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.452 5 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.452 6 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.452 7 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.452 8 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.452 9 00:06:20.452 09:27:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:20.452 09:27:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:20.452 09:27:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:20.452 09:27:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.837 10 00:06:21.837 09:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:21.837 09:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:21.837 09:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:21.837 09:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.778 09:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:22.778 09:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:22.778 09:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:22.778 09:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:22.778 09:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.349 09:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:23.349 09:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:23.349 09:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:23.349 09:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.919 09:27:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:23.919 09:27:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:23.919 09:27:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:23.919 09:27:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:23.919 09:27:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.490 09:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:24.490 00:06:24.490 real 0m4.215s 00:06:24.490 user 0m0.028s 00:06:24.490 sys 0m0.004s 00:06:24.490 09:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:24.490 09:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.490 ************************************ 00:06:24.490 END TEST scheduler_create_thread 00:06:24.490 ************************************ 00:06:24.490 09:27:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:24.490 09:27:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3126916 00:06:24.490 09:27:24 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' -z 3126916 ']' 00:06:24.490 09:27:24 event.event_scheduler -- common/autotest_common.sh@957 -- # kill -0 3126916 00:06:24.490 09:27:24 event.event_scheduler -- common/autotest_common.sh@958 -- # uname 00:06:24.750 09:27:24 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:06:24.750 09:27:24 event.event_scheduler -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3126916 00:06:24.750 09:27:24 event.event_scheduler -- common/autotest_common.sh@959 -- # process_name=reactor_2 00:06:24.750 09:27:24 event.event_scheduler -- common/autotest_common.sh@963 -- # '[' reactor_2 = sudo ']' 00:06:24.750 09:27:24 event.event_scheduler -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3126916' 00:06:24.750 killing process with pid 3126916 00:06:24.750 09:27:24 event.event_scheduler -- common/autotest_common.sh@972 -- # kill 3126916 00:06:24.750 09:27:24 event.event_scheduler -- common/autotest_common.sh@977 -- # wait 3126916 00:06:24.750 [2024-10-07 09:27:24.386560] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
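Note: scheduler_create_thread above exercises a test-local rpc.py plugin — threads are created with a cpumask and an active percentage, one is re-weighted with scheduler_thread_set_active, and one is deleted again. A condensed sketch of the same calls, assuming PYTHONPATH is extended so the scheduler_plugin module from test/event/scheduler resolves (which is how scheduler.sh makes the plugin visible):
export PYTHONPATH=$PYTHONPATH:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# create a thread pinned to core 0 reporting 100% busy; the RPC echoes the new thread id
thread_id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
# drop the reported busy percentage to 50, then remove the thread
$rpc --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
$rpc --plugin scheduler_plugin scheduler_thread_delete "$thread_id"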
00:06:25.010 00:06:25.010 real 0m5.877s 00:06:25.010 user 0m13.343s 00:06:25.010 sys 0m0.440s 00:06:25.010 09:27:24 event.event_scheduler -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:25.010 09:27:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.010 ************************************ 00:06:25.010 END TEST event_scheduler 00:06:25.010 ************************************ 00:06:25.010 09:27:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:25.010 09:27:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:25.010 09:27:24 event -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:25.010 09:27:24 event -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:25.010 09:27:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.010 ************************************ 00:06:25.010 START TEST app_repeat 00:06:25.010 ************************************ 00:06:25.010 09:27:24 event.app_repeat -- common/autotest_common.sh@1128 -- # app_repeat_test 00:06:25.010 09:27:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.010 09:27:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.010 09:27:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:25.010 09:27:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.010 09:27:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:25.010 09:27:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:25.010 09:27:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:25.010 09:27:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3127982 00:06:25.010 09:27:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:25.010 09:27:24 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:25.010 09:27:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3127982' 00:06:25.011 Process app_repeat pid: 3127982 00:06:25.011 09:27:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:25.011 09:27:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:25.011 spdk_app_start Round 0 00:06:25.011 09:27:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3127982 /var/tmp/spdk-nbd.sock 00:06:25.011 09:27:24 event.app_repeat -- common/autotest_common.sh@834 -- # '[' -z 3127982 ']' 00:06:25.011 09:27:24 event.app_repeat -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.011 09:27:24 event.app_repeat -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:25.011 09:27:24 event.app_repeat -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.011 09:27:24 event.app_repeat -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:25.011 09:27:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.271 [2024-10-07 09:27:24.691437] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:06:25.271 [2024-10-07 09:27:24.691505] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3127982 ] 00:06:25.271 [2024-10-07 09:27:24.770381] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.271 [2024-10-07 09:27:24.829613] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.271 [2024-10-07 09:27:24.829614] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.211 09:27:25 event.app_repeat -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:26.211 09:27:25 event.app_repeat -- common/autotest_common.sh@867 -- # return 0 00:06:26.211 09:27:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.211 Malloc0 00:06:26.211 09:27:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.211 Malloc1 00:06:26.211 09:27:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.211 09:27:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.211 09:27:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.211 09:27:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.211 09:27:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.211 09:27:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.211 09:27:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.472 09:27:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.472 09:27:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.472 09:27:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.472 09:27:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.472 09:27:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.472 09:27:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.472 09:27:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.472 09:27:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.472 09:27:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.472 /dev/nbd0 00:06:26.472 09:27:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.472 09:27:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.472 09:27:26 event.app_repeat -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:06:26.472 09:27:26 event.app_repeat -- common/autotest_common.sh@872 -- # local i 00:06:26.472 09:27:26 event.app_repeat -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:06:26.472 09:27:26 event.app_repeat -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:06:26.472 09:27:26 event.app_repeat -- common/autotest_common.sh@875 -- # grep -q -w nbd0 
/proc/partitions 00:06:26.472 09:27:26 event.app_repeat -- common/autotest_common.sh@876 -- # break 00:06:26.472 09:27:26 event.app_repeat -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:06:26.472 09:27:26 event.app_repeat -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:06:26.472 09:27:26 event.app_repeat -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.472 1+0 records in 00:06:26.472 1+0 records out 00:06:26.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002945 s, 13.9 MB/s 00:06:26.472 09:27:26 event.app_repeat -- common/autotest_common.sh@889 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.472 09:27:26 event.app_repeat -- common/autotest_common.sh@889 -- # size=4096 00:06:26.472 09:27:26 event.app_repeat -- common/autotest_common.sh@890 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.472 09:27:26 event.app_repeat -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:06:26.472 09:27:26 event.app_repeat -- common/autotest_common.sh@892 -- # return 0 00:06:26.472 09:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.472 09:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.472 09:27:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.732 /dev/nbd1 00:06:26.732 09:27:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.732 09:27:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.732 09:27:26 event.app_repeat -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:06:26.732 09:27:26 event.app_repeat -- common/autotest_common.sh@872 -- # local i 00:06:26.732 09:27:26 event.app_repeat -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:06:26.732 09:27:26 event.app_repeat -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:06:26.732 09:27:26 event.app_repeat -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:06:26.732 09:27:26 event.app_repeat -- common/autotest_common.sh@876 -- # break 00:06:26.732 09:27:26 event.app_repeat -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:06:26.732 09:27:26 event.app_repeat -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:06:26.732 09:27:26 event.app_repeat -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.732 1+0 records in 00:06:26.732 1+0 records out 00:06:26.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268025 s, 15.3 MB/s 00:06:26.732 09:27:26 event.app_repeat -- common/autotest_common.sh@889 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.732 09:27:26 event.app_repeat -- common/autotest_common.sh@889 -- # size=4096 00:06:26.732 09:27:26 event.app_repeat -- common/autotest_common.sh@890 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.732 09:27:26 event.app_repeat -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:06:26.732 09:27:26 event.app_repeat -- common/autotest_common.sh@892 -- # return 0 00:06:26.732 09:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.732 09:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.732 09:27:26 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.732 09:27:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.732 09:27:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:26.993 { 00:06:26.993 "nbd_device": "/dev/nbd0", 00:06:26.993 "bdev_name": "Malloc0" 00:06:26.993 }, 00:06:26.993 { 00:06:26.993 "nbd_device": "/dev/nbd1", 00:06:26.993 "bdev_name": "Malloc1" 00:06:26.993 } 00:06:26.993 ]' 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.993 { 00:06:26.993 "nbd_device": "/dev/nbd0", 00:06:26.993 "bdev_name": "Malloc0" 00:06:26.993 }, 00:06:26.993 { 00:06:26.993 "nbd_device": "/dev/nbd1", 00:06:26.993 "bdev_name": "Malloc1" 00:06:26.993 } 00:06:26.993 ]' 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.993 /dev/nbd1' 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.993 /dev/nbd1' 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:26.993 256+0 records in 00:06:26.993 256+0 records out 00:06:26.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127341 s, 82.3 MB/s 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.993 256+0 records in 00:06:26.993 256+0 records out 00:06:26.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120314 s, 87.2 MB/s 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.993 256+0 records in 00:06:26.993 256+0 records out 00:06:26.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134898 s, 77.7 MB/s 00:06:26.993 09:27:26 
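The write pass just traced seeds a single 1 MiB random file and copies it verbatim to each NBD device with O_DIRECT; the verify pass that follows cmp's every device back against that same file, so corruption on either path fails the round. Condensed shape of the write half (paths shortened relative to the workspace paths above):

    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256          # 256 x 4 KiB = 1 MiB reference data
    for i in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct  # O_DIRECT so data really hits the backend
    done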
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.993 09:27:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.253 09:27:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.253 09:27:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.253 09:27:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.253 09:27:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.253 09:27:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.253 09:27:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.253 09:27:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.253 09:27:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.253 09:27:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.253 09:27:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.513 09:27:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.513 09:27:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.513 09:27:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.513 09:27:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.513 09:27:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:06:27.513 09:27:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.513 09:27:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.513 09:27:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.513 09:27:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.513 09:27:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.513 09:27:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.773 09:27:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.773 09:27:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.773 09:27:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.774 09:27:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.774 09:27:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.774 09:27:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.774 09:27:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.774 09:27:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.774 09:27:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.774 09:27:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.774 09:27:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.774 09:27:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.774 09:27:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.033 09:27:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.033 [2024-10-07 09:27:27.562366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.033 [2024-10-07 09:27:27.613730] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.034 [2024-10-07 09:27:27.613731] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.034 [2024-10-07 09:27:27.642748] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.034 [2024-10-07 09:27:27.642779] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.331 09:27:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.331 09:27:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:31.331 spdk_app_start Round 1 00:06:31.331 09:27:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3127982 /var/tmp/spdk-nbd.sock 00:06:31.331 09:27:30 event.app_repeat -- common/autotest_common.sh@834 -- # '[' -z 3127982 ']' 00:06:31.331 09:27:30 event.app_repeat -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.331 09:27:30 event.app_repeat -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:31.331 09:27:30 event.app_repeat -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:31.331 09:27:30 event.app_repeat -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:31.331 09:27:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.331 09:27:30 event.app_repeat -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:31.331 09:27:30 event.app_repeat -- common/autotest_common.sh@867 -- # return 0 00:06:31.331 09:27:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.331 Malloc0 00:06:31.331 09:27:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.592 Malloc1 00:06:31.592 09:27:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:31.592 /dev/nbd0 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.592 09:27:31 event.app_repeat -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:06:31.592 09:27:31 event.app_repeat -- common/autotest_common.sh@872 -- # local i 00:06:31.592 09:27:31 event.app_repeat -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:06:31.592 09:27:31 event.app_repeat -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:06:31.592 09:27:31 event.app_repeat -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:06:31.592 09:27:31 event.app_repeat -- common/autotest_common.sh@876 -- # break 00:06:31.592 09:27:31 event.app_repeat -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:06:31.592 09:27:31 event.app_repeat -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:06:31.592 09:27:31 event.app_repeat -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:31.592 1+0 records in 00:06:31.592 1+0 records out 00:06:31.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274924 s, 14.9 MB/s 00:06:31.592 09:27:31 event.app_repeat -- common/autotest_common.sh@889 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.592 09:27:31 event.app_repeat -- common/autotest_common.sh@889 -- # size=4096 00:06:31.592 09:27:31 event.app_repeat -- common/autotest_common.sh@890 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.592 09:27:31 event.app_repeat -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:06:31.592 09:27:31 event.app_repeat -- common/autotest_common.sh@892 -- # return 0 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.592 09:27:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:31.853 /dev/nbd1 00:06:31.853 09:27:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:31.853 09:27:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:31.853 09:27:31 event.app_repeat -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:06:31.853 09:27:31 event.app_repeat -- common/autotest_common.sh@872 -- # local i 00:06:31.853 09:27:31 event.app_repeat -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:06:31.853 09:27:31 event.app_repeat -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:06:31.853 09:27:31 event.app_repeat -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:06:31.853 09:27:31 event.app_repeat -- common/autotest_common.sh@876 -- # break 00:06:31.853 09:27:31 event.app_repeat -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:06:31.853 09:27:31 event.app_repeat -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:06:31.853 09:27:31 event.app_repeat -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.853 1+0 records in 00:06:31.853 1+0 records out 00:06:31.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300925 s, 13.6 MB/s 00:06:31.853 09:27:31 event.app_repeat -- common/autotest_common.sh@889 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.853 09:27:31 event.app_repeat -- common/autotest_common.sh@889 -- # size=4096 00:06:31.853 09:27:31 event.app_repeat -- common/autotest_common.sh@890 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.853 09:27:31 event.app_repeat -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:06:31.853 09:27:31 event.app_repeat -- common/autotest_common.sh@892 -- # return 0 00:06:31.853 09:27:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.853 09:27:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.853 09:27:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.853 09:27:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.853 09:27:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.113 09:27:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:32.113 { 00:06:32.113 "nbd_device": "/dev/nbd0", 00:06:32.113 "bdev_name": "Malloc0" 00:06:32.113 }, 00:06:32.113 { 00:06:32.113 "nbd_device": "/dev/nbd1", 00:06:32.113 "bdev_name": "Malloc1" 00:06:32.113 } 00:06:32.113 ]' 00:06:32.113 09:27:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.113 { 00:06:32.113 "nbd_device": "/dev/nbd0", 00:06:32.113 "bdev_name": "Malloc0" 00:06:32.113 }, 00:06:32.113 { 00:06:32.114 "nbd_device": "/dev/nbd1", 00:06:32.114 "bdev_name": "Malloc1" 00:06:32.114 } 00:06:32.114 ]' 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.114 /dev/nbd1' 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.114 /dev/nbd1' 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.114 256+0 records in 00:06:32.114 256+0 records out 00:06:32.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128179 s, 81.8 MB/s 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.114 256+0 records in 00:06:32.114 256+0 records out 00:06:32.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123138 s, 85.2 MB/s 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.114 09:27:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.375 256+0 records in 00:06:32.375 256+0 records out 00:06:32.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131965 s, 79.5 MB/s 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.375 09:27:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.637 09:27:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.637 09:27:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:32.637 09:27:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.637 09:27:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.637 09:27:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.637 09:27:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:32.637 09:27:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.637 09:27:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.637 09:27:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.637 09:27:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.637 09:27:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.899 09:27:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.899 09:27:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.899 09:27:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:32.899 09:27:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:32.899 09:27:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:32.899 09:27:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.899 09:27:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:32.899 09:27:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:32.899 09:27:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:32.899 09:27:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:32.899 09:27:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:32.899 09:27:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:32.899 09:27:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.159 09:27:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:33.159 [2024-10-07 09:27:32.696336] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.159 [2024-10-07 09:27:32.748650] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.159 [2024-10-07 09:27:32.748680] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.159 [2024-10-07 09:27:32.778371] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:33.159 [2024-10-07 09:27:32.778401] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:36.459 09:27:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:36.459 09:27:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:36.459 spdk_app_start Round 2 00:06:36.459 09:27:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3127982 /var/tmp/spdk-nbd.sock 00:06:36.459 09:27:35 event.app_repeat -- common/autotest_common.sh@834 -- # '[' -z 3127982 ']' 00:06:36.459 09:27:35 event.app_repeat -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.459 09:27:35 event.app_repeat -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:36.459 09:27:35 event.app_repeat -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:36.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
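nbd_get_count, called twice per round above, never asks the kernel anything: it counts devices straight out of the RPC's JSON, with jq pulling the nbd_device fields and grep -c tallying them. Condensed from the trace (rpc.py stands in for the full script path; the lone true at count 0 above is exactly this || true fallback):

    nbd_get_count() {
        local rpc_server=$1 disks_json disks_name count
        disks_json=$(rpc.py -s "$rpc_server" nbd_get_disks)
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits 1 on zero matches; || true keeps the empty-list case alive
        count=$(echo "$disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }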
00:06:36.459 09:27:35 event.app_repeat -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:36.459 09:27:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.459 09:27:35 event.app_repeat -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:36.459 09:27:35 event.app_repeat -- common/autotest_common.sh@867 -- # return 0 00:06:36.459 09:27:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.459 Malloc0 00:06:36.459 09:27:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.727 Malloc1 00:06:36.727 09:27:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:36.727 /dev/nbd0 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.727 09:27:36 event.app_repeat -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:06:36.727 09:27:36 event.app_repeat -- common/autotest_common.sh@872 -- # local i 00:06:36.727 09:27:36 event.app_repeat -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:06:36.727 09:27:36 event.app_repeat -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:06:36.727 09:27:36 event.app_repeat -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:06:36.727 09:27:36 event.app_repeat -- common/autotest_common.sh@876 -- # break 00:06:36.727 09:27:36 event.app_repeat -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:06:36.727 09:27:36 event.app_repeat -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:06:36.727 09:27:36 event.app_repeat -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:36.727 1+0 records in 00:06:36.727 1+0 records out 00:06:36.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219524 s, 18.7 MB/s 00:06:36.727 09:27:36 event.app_repeat -- common/autotest_common.sh@889 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:36.727 09:27:36 event.app_repeat -- common/autotest_common.sh@889 -- # size=4096 00:06:36.727 09:27:36 event.app_repeat -- common/autotest_common.sh@890 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:36.727 09:27:36 event.app_repeat -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:06:36.727 09:27:36 event.app_repeat -- common/autotest_common.sh@892 -- # return 0 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.727 09:27:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:36.989 /dev/nbd1 00:06:36.989 09:27:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:36.989 09:27:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:36.989 09:27:36 event.app_repeat -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:06:36.989 09:27:36 event.app_repeat -- common/autotest_common.sh@872 -- # local i 00:06:36.989 09:27:36 event.app_repeat -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:06:36.989 09:27:36 event.app_repeat -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:06:36.989 09:27:36 event.app_repeat -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:06:36.989 09:27:36 event.app_repeat -- common/autotest_common.sh@876 -- # break 00:06:36.989 09:27:36 event.app_repeat -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:06:36.989 09:27:36 event.app_repeat -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:06:36.989 09:27:36 event.app_repeat -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.989 1+0 records in 00:06:36.989 1+0 records out 00:06:36.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284094 s, 14.4 MB/s 00:06:36.989 09:27:36 event.app_repeat -- common/autotest_common.sh@889 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:36.989 09:27:36 event.app_repeat -- common/autotest_common.sh@889 -- # size=4096 00:06:36.989 09:27:36 event.app_repeat -- common/autotest_common.sh@890 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:36.989 09:27:36 event.app_repeat -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:06:36.989 09:27:36 event.app_repeat -- common/autotest_common.sh@892 -- # return 0 00:06:36.989 09:27:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.989 09:27:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.989 09:27:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.989 09:27:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.989 09:27:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.249 09:27:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:37.249 { 00:06:37.249 "nbd_device": "/dev/nbd0", 00:06:37.249 "bdev_name": "Malloc0" 00:06:37.249 }, 00:06:37.249 { 00:06:37.249 "nbd_device": "/dev/nbd1", 00:06:37.249 "bdev_name": "Malloc1" 00:06:37.249 } 00:06:37.249 ]' 00:06:37.249 09:27:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.249 { 00:06:37.249 "nbd_device": "/dev/nbd0", 00:06:37.249 "bdev_name": "Malloc0" 00:06:37.249 }, 00:06:37.249 { 00:06:37.249 "nbd_device": "/dev/nbd1", 00:06:37.249 "bdev_name": "Malloc1" 00:06:37.249 } 00:06:37.249 ]' 00:06:37.249 09:27:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:37.250 /dev/nbd1' 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:37.250 /dev/nbd1' 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:37.250 256+0 records in 00:06:37.250 256+0 records out 00:06:37.250 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128066 s, 81.9 MB/s 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:37.250 256+0 records in 00:06:37.250 256+0 records out 00:06:37.250 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124215 s, 84.4 MB/s 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:37.250 256+0 records in 00:06:37.250 256+0 records out 00:06:37.250 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130986 s, 80.1 MB/s 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.250 09:27:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:37.511 09:27:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.511 09:27:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:37.511 09:27:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.511 09:27:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:37.511 09:27:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.511 09:27:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.511 09:27:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.511 09:27:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:37.511 09:27:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.511 09:27:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.511 09:27:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.511 09:27:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.511 09:27:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.511 09:27:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.511 09:27:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.511 09:27:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.511 09:27:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.511 09:27:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.511 09:27:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.511 09:27:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:37.773 09:27:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:37.773 09:27:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:37.773 09:27:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:37.773 09:27:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.773 09:27:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.773 09:27:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:37.773 09:27:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.773 09:27:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.773 09:27:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.773 09:27:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.773 09:27:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.035 09:27:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.035 09:27:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.035 09:27:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.035 09:27:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.035 09:27:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.035 09:27:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.035 09:27:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:38.035 09:27:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.035 09:27:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.035 09:27:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:38.035 09:27:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:38.035 09:27:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:38.035 09:27:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:38.296 09:27:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:38.296 [2024-10-07 09:27:37.853307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.296 [2024-10-07 09:27:37.906003] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.296 [2024-10-07 09:27:37.906003] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.296 [2024-10-07 09:27:37.935107] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:38.296 [2024-10-07 09:27:37.935136] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:41.607 09:27:40 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3127982 /var/tmp/spdk-nbd.sock 00:06:41.607 09:27:40 event.app_repeat -- common/autotest_common.sh@834 -- # '[' -z 3127982 ']' 00:06:41.607 09:27:40 event.app_repeat -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.607 09:27:40 event.app_repeat -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:41.607 09:27:40 event.app_repeat -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:41.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
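Teardown is the mirror image of setup: after each nbd_stop_disk RPC, waitfornbd_exit polls /proc/partitions until the node disappears, the inverse of the waitfornbd check sketched earlier. Minimal shape (the sleep is again an assumption):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # break as soon as the node is gone from /proc/partitions
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }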
00:06:41.607 09:27:40 event.app_repeat -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:41.607 09:27:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.607 09:27:40 event.app_repeat -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:41.607 09:27:40 event.app_repeat -- common/autotest_common.sh@867 -- # return 0 00:06:41.607 09:27:40 event.app_repeat -- event/event.sh@39 -- # killprocess 3127982 00:06:41.607 09:27:40 event.app_repeat -- common/autotest_common.sh@953 -- # '[' -z 3127982 ']' 00:06:41.607 09:27:40 event.app_repeat -- common/autotest_common.sh@957 -- # kill -0 3127982 00:06:41.607 09:27:40 event.app_repeat -- common/autotest_common.sh@958 -- # uname 00:06:41.607 09:27:40 event.app_repeat -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:06:41.607 09:27:40 event.app_repeat -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3127982 00:06:41.607 09:27:41 event.app_repeat -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:06:41.607 09:27:41 event.app_repeat -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:06:41.607 09:27:41 event.app_repeat -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3127982' 00:06:41.607 killing process with pid 3127982 00:06:41.607 09:27:41 event.app_repeat -- common/autotest_common.sh@972 -- # kill 3127982 00:06:41.607 09:27:41 event.app_repeat -- common/autotest_common.sh@977 -- # wait 3127982 00:06:41.607 spdk_app_start is called in Round 0. 00:06:41.607 Shutdown signal received, stop current app iteration 00:06:41.607 Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 reinitialization... 00:06:41.607 spdk_app_start is called in Round 1. 00:06:41.607 Shutdown signal received, stop current app iteration 00:06:41.607 Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 reinitialization... 00:06:41.607 spdk_app_start is called in Round 2. 00:06:41.607 Shutdown signal received, stop current app iteration 00:06:41.607 Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 reinitialization... 00:06:41.607 spdk_app_start is called in Round 3. 
00:06:41.607 Shutdown signal received, stop current app iteration 00:06:41.607 09:27:41 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:41.607 09:27:41 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:41.607 00:06:41.607 real 0m16.459s 00:06:41.607 user 0m35.991s 00:06:41.607 sys 0m2.371s 00:06:41.607 09:27:41 event.app_repeat -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:41.607 09:27:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.607 ************************************ 00:06:41.607 END TEST app_repeat 00:06:41.607 ************************************ 00:06:41.607 09:27:41 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:41.607 09:27:41 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:41.607 09:27:41 event -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:41.607 09:27:41 event -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:41.607 09:27:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.607 ************************************ 00:06:41.607 START TEST cpu_locks 00:06:41.607 ************************************ 00:06:41.607 09:27:41 event.cpu_locks -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:41.867 * Looking for test storage... 00:06:41.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:41.867 09:27:41 event.cpu_locks -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:06:41.867 09:27:41 event.cpu_locks -- common/autotest_common.sh@1626 -- # lcov --version 00:06:41.867 09:27:41 event.cpu_locks -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:06:41.867 09:27:41 event.cpu_locks -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.867 09:27:41 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:41.867 09:27:41 event.cpu_locks -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.867 09:27:41 event.cpu_locks -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:06:41.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.867 --rc genhtml_branch_coverage=1 00:06:41.867 --rc genhtml_function_coverage=1 00:06:41.867 --rc genhtml_legend=1 00:06:41.867 --rc geninfo_all_blocks=1 00:06:41.867 --rc geninfo_unexecuted_blocks=1 00:06:41.867 00:06:41.867 ' 00:06:41.867 09:27:41 event.cpu_locks -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:06:41.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.867 --rc genhtml_branch_coverage=1 00:06:41.867 --rc genhtml_function_coverage=1 00:06:41.867 --rc genhtml_legend=1 00:06:41.867 --rc geninfo_all_blocks=1 00:06:41.867 --rc geninfo_unexecuted_blocks=1 00:06:41.867 00:06:41.867 ' 00:06:41.867 09:27:41 event.cpu_locks -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:06:41.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.867 --rc genhtml_branch_coverage=1 00:06:41.867 --rc genhtml_function_coverage=1 00:06:41.867 --rc genhtml_legend=1 00:06:41.867 --rc geninfo_all_blocks=1 00:06:41.867 --rc geninfo_unexecuted_blocks=1 00:06:41.867 00:06:41.867 ' 00:06:41.867 09:27:41 event.cpu_locks -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:06:41.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.867 --rc genhtml_branch_coverage=1 00:06:41.867 --rc genhtml_function_coverage=1 00:06:41.867 --rc genhtml_legend=1 00:06:41.867 --rc geninfo_all_blocks=1 00:06:41.867 --rc geninfo_unexecuted_blocks=1 00:06:41.867 00:06:41.867 ' 00:06:41.867 09:27:41 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:41.867 09:27:41 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:41.867 09:27:41 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:41.867 09:27:41 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:41.867 09:27:41 event.cpu_locks -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:41.867 09:27:41 event.cpu_locks -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:41.867 09:27:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.867 ************************************ 
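cpu_locks opens with the usual lcov gate: lt 1.15 2 splits both version strings on the characters .-: and compares them field by field through cmp_versions. A condensed sketch of that comparison (the real scripts/common.sh also validates each field through a decimal() helper, elided here):

    cmp_versions() {
        local op=$2 v
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            # missing components count as 0, so 1.15 < 2 is decided at the first field
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == ">" ]]; return
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == "<" ]]; return
            fi
        done
        [[ $op == "=" ]]
    }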
00:06:41.867 START TEST default_locks 00:06:41.867 ************************************ 00:06:41.867 09:27:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # default_locks 00:06:41.867 09:27:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3131584 00:06:41.867 09:27:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3131584 00:06:41.867 09:27:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.867 09:27:41 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # '[' -z 3131584 ']' 00:06:41.867 09:27:41 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.867 09:27:41 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:41.867 09:27:41 event.cpu_locks.default_locks -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.867 09:27:41 event.cpu_locks.default_locks -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:41.867 09:27:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.867 [2024-10-07 09:27:41.520185] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:06:41.867 [2024-10-07 09:27:41.520234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3131584 ] 00:06:42.127 [2024-10-07 09:27:41.598487] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.127 [2024-10-07 09:27:41.655195] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.697 09:27:42 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:42.697 09:27:42 event.cpu_locks.default_locks -- common/autotest_common.sh@867 -- # return 0 00:06:42.697 09:27:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3131584 00:06:42.697 09:27:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.697 09:27:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3131584 00:06:43.265 lslocks: write error 00:06:43.265 09:27:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3131584 00:06:43.265 09:27:42 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' -z 3131584 ']' 00:06:43.265 09:27:42 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # kill -0 3131584 00:06:43.265 09:27:42 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # uname 00:06:43.265 09:27:42 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:06:43.265 09:27:42 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3131584 00:06:43.525 09:27:42 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:06:43.525 09:27:42 event.cpu_locks.default_locks -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:06:43.525 09:27:42 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # echo 'killing process with 
pid 3131584' 00:06:43.525 killing process with pid 3131584 00:06:43.525 09:27:42 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # kill 3131584 00:06:43.525 09:27:42 event.cpu_locks.default_locks -- common/autotest_common.sh@977 -- # wait 3131584 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3131584 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # local es=0 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # valid_exec_arg waitforlisten 3131584 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # local arg=waitforlisten 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@645 -- # type -t waitforlisten 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@656 -- # waitforlisten 3131584 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # '[' -z 3131584 ']' 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 849: kill: (3131584) - No such process 00:06:43.525 ERROR: process (pid: 3131584) is no longer running 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@867 -- # return 1 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@656 -- # es=1 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:43.525 00:06:43.525 real 0m1.680s 00:06:43.525 user 0m1.801s 00:06:43.525 sys 0m0.602s 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:43.525 09:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.525 ************************************ 00:06:43.525 END TEST default_locks 00:06:43.525 ************************************ 00:06:43.525 09:27:43 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:43.525 09:27:43 event.cpu_locks -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:43.525 09:27:43 event.cpu_locks -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:43.525 09:27:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.850 ************************************ 00:06:43.850 START TEST default_locks_via_rpc 00:06:43.850 ************************************ 00:06:43.850 09:27:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # default_locks_via_rpc 00:06:43.850 09:27:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3131951 00:06:43.850 09:27:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3131951 00:06:43.850 09:27:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.850 09:27:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # '[' -z 3131951 ']' 00:06:43.850 09:27:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.850 09:27:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:43.850 09:27:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:43.850 09:27:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:43.850 09:27:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.850 [2024-10-07 09:27:43.279037] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:06:43.850 [2024-10-07 09:27:43.279088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3131951 ] 00:06:43.850 [2024-10-07 09:27:43.355525] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.850 [2024-10-07 09:27:43.411328] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.568 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:44.568 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@867 -- # return 0 00:06:44.568 09:27:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:44.568 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:44.568 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.568 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:44.568 09:27:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:44.568 09:27:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:44.568 09:27:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:44.568 09:27:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:44.568 09:27:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:44.568 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:44.568 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.568 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:44.568 09:27:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3131951 00:06:44.568 09:27:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3131951 00:06:44.568 09:27:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.138 09:27:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3131951 00:06:45.138 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' -z 3131951 ']' 00:06:45.138 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # kill -0 3131951 00:06:45.138 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # uname 00:06:45.138 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:06:45.138 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3131951 00:06:45.138 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:06:45.138 
09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:06:45.138 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3131951' 00:06:45.138 killing process with pid 3131951 00:06:45.138 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # kill 3131951 00:06:45.138 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@977 -- # wait 3131951 00:06:45.400 00:06:45.400 real 0m1.642s 00:06:45.400 user 0m1.757s 00:06:45.400 sys 0m0.575s 00:06:45.400 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:45.400 09:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.400 ************************************ 00:06:45.400 END TEST default_locks_via_rpc 00:06:45.400 ************************************ 00:06:45.400 09:27:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:45.400 09:27:44 event.cpu_locks -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:45.400 09:27:44 event.cpu_locks -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:45.400 09:27:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.400 ************************************ 00:06:45.400 START TEST non_locking_app_on_locked_coremask 00:06:45.400 ************************************ 00:06:45.400 09:27:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # non_locking_app_on_locked_coremask 00:06:45.400 09:27:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3132326 00:06:45.400 09:27:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3132326 /var/tmp/spdk.sock 00:06:45.400 09:27:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.400 09:27:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # '[' -z 3132326 ']' 00:06:45.400 09:27:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.400 09:27:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:45.400 09:27:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.400 09:27:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:45.400 09:27:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.400 [2024-10-07 09:27:44.992079] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:06:45.400 [2024-10-07 09:27:44.992127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132326 ] 00:06:45.661 [2024-10-07 09:27:45.067999] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.661 [2024-10-07 09:27:45.122595] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.234 09:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:46.234 09:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@867 -- # return 0 00:06:46.234 09:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:46.234 09:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3132589 00:06:46.234 09:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3132589 /var/tmp/spdk2.sock 00:06:46.234 09:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # '[' -z 3132589 ']' 00:06:46.234 09:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.234 09:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:46.234 09:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.234 09:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:46.234 09:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.234 [2024-10-07 09:27:45.806955] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:06:46.234 [2024-10-07 09:27:45.807005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132589 ] 00:06:46.234 [2024-10-07 09:27:45.876385] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:46.234 [2024-10-07 09:27:45.876410] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.495 [2024-10-07 09:27:45.987008] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.066 09:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:47.066 09:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@867 -- # return 0 00:06:47.066 09:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3132326 00:06:47.066 09:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3132326 00:06:47.066 09:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.638 lslocks: write error 00:06:47.638 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3132326 00:06:47.638 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' -z 3132326 ']' 00:06:47.638 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # kill -0 3132326 00:06:47.638 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # uname 00:06:47.638 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:06:47.638 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3132326 00:06:47.638 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:06:47.638 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:06:47.638 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3132326' 00:06:47.638 killing process with pid 3132326 00:06:47.638 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # kill 3132326 00:06:47.638 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@977 -- # wait 3132326 00:06:48.210 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3132589 00:06:48.210 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' -z 3132589 ']' 00:06:48.210 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # kill -0 3132589 00:06:48.210 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # uname 00:06:48.210 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:06:48.210 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3132589 00:06:48.210 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:06:48.210 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:06:48.210 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3132589' 00:06:48.210 
killing process with pid 3132589 00:06:48.210 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # kill 3132589 00:06:48.210 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@977 -- # wait 3132589 00:06:48.471 00:06:48.471 real 0m3.012s 00:06:48.471 user 0m3.345s 00:06:48.471 sys 0m0.912s 00:06:48.471 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:48.471 09:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.471 ************************************ 00:06:48.471 END TEST non_locking_app_on_locked_coremask 00:06:48.471 ************************************ 00:06:48.471 09:27:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:48.471 09:27:47 event.cpu_locks -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:48.471 09:27:47 event.cpu_locks -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:48.471 09:27:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.471 ************************************ 00:06:48.471 START TEST locking_app_on_unlocked_coremask 00:06:48.471 ************************************ 00:06:48.471 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # locking_app_on_unlocked_coremask 00:06:48.471 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3133037 00:06:48.471 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3133037 /var/tmp/spdk.sock 00:06:48.471 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:48.471 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # '[' -z 3133037 ']' 00:06:48.471 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.471 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:48.471 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.471 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:48.471 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.471 [2024-10-07 09:27:48.083352] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:06:48.471 [2024-10-07 09:27:48.083401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3133037 ] 00:06:48.732 [2024-10-07 09:27:48.159167] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:48.732 [2024-10-07 09:27:48.159192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.732 [2024-10-07 09:27:48.214009] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.303 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:49.303 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@867 -- # return 0 00:06:49.303 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3133152 00:06:49.303 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3133152 /var/tmp/spdk2.sock 00:06:49.303 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # '[' -z 3133152 ']' 00:06:49.303 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:49.303 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.303 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:49.303 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.303 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:49.303 09:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.303 [2024-10-07 09:27:48.944260] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:06:49.303 [2024-10-07 09:27:48.944315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3133152 ] 00:06:49.564 [2024-10-07 09:27:49.015814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.564 [2024-10-07 09:27:49.126595] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.135 09:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:50.135 09:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@867 -- # return 0 00:06:50.135 09:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3133152 00:06:50.135 09:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3133152 00:06:50.135 09:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.707 lslocks: write error 00:06:50.707 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3133037 00:06:50.707 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' -z 3133037 ']' 00:06:50.707 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # kill -0 3133037 00:06:50.707 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # uname 00:06:50.707 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:06:50.707 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3133037 00:06:50.707 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:06:50.707 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:06:50.707 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3133037' 00:06:50.707 killing process with pid 3133037 00:06:50.707 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # kill 3133037 00:06:50.707 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@977 -- # wait 3133037 00:06:51.278 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3133152 00:06:51.278 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' -z 3133152 ']' 00:06:51.278 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # kill -0 3133152 00:06:51.278 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # uname 00:06:51.278 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:06:51.278 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3133152 00:06:51.279 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:06:51.279 09:27:50 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:06:51.279 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3133152' 00:06:51.279 killing process with pid 3133152 00:06:51.279 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # kill 3133152 00:06:51.279 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@977 -- # wait 3133152 00:06:51.540 00:06:51.540 real 0m2.975s 00:06:51.540 user 0m3.307s 00:06:51.540 sys 0m0.922s 00:06:51.540 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:51.540 09:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.540 ************************************ 00:06:51.540 END TEST locking_app_on_unlocked_coremask 00:06:51.540 ************************************ 00:06:51.540 09:27:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:51.540 09:27:51 event.cpu_locks -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:51.540 09:27:51 event.cpu_locks -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:51.540 09:27:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.540 ************************************ 00:06:51.540 START TEST locking_app_on_locked_coremask 00:06:51.540 ************************************ 00:06:51.540 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # locking_app_on_locked_coremask 00:06:51.540 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3133743 00:06:51.540 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3133743 /var/tmp/spdk.sock 00:06:51.540 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.540 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # '[' -z 3133743 ']' 00:06:51.540 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.540 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:51.540 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.540 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:51.540 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.540 [2024-10-07 09:27:51.132952] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:06:51.540 [2024-10-07 09:27:51.133009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3133743 ] 00:06:51.801 [2024-10-07 09:27:51.210372] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.801 [2024-10-07 09:27:51.270426] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.373 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:52.373 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@867 -- # return 0 00:06:52.373 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3133760 00:06:52.373 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3133760 /var/tmp/spdk2.sock 00:06:52.373 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # local es=0 00:06:52.373 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:52.373 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # valid_exec_arg waitforlisten 3133760 /var/tmp/spdk2.sock 00:06:52.373 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # local arg=waitforlisten 00:06:52.373 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:06:52.373 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@645 -- # type -t waitforlisten 00:06:52.373 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:06:52.373 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@656 -- # waitforlisten 3133760 /var/tmp/spdk2.sock 00:06:52.373 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # '[' -z 3133760 ']' 00:06:52.373 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.373 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:52.373 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.373 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:52.373 09:27:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.373 [2024-10-07 09:27:51.972652] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:06:52.373 [2024-10-07 09:27:51.972706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3133760 ] 00:06:52.634 [2024-10-07 09:27:52.048857] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3133743 has claimed it. 00:06:52.634 [2024-10-07 09:27:52.048891] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:53.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 849: kill: (3133760) - No such process 00:06:53.205 ERROR: process (pid: 3133760) is no longer running 00:06:53.205 09:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:53.205 09:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@867 -- # return 1 00:06:53.205 09:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@656 -- # es=1 00:06:53.205 09:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:06:53.205 09:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:06:53.205 09:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:06:53.205 09:27:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3133743 00:06:53.205 09:27:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3133743 00:06:53.205 09:27:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.777 lslocks: write error 00:06:53.777 09:27:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3133743 00:06:53.777 09:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' -z 3133743 ']' 00:06:53.777 09:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # kill -0 3133743 00:06:53.777 09:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # uname 00:06:53.777 09:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:06:53.777 09:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3133743 00:06:53.777 09:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:06:53.777 09:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:06:53.777 09:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3133743' 00:06:53.777 killing process with pid 3133743 00:06:53.777 09:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # kill 3133743 00:06:53.777 09:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@977 -- # wait 3133743 00:06:53.777 00:06:53.777 real 0m2.327s 00:06:53.777 user 0m2.616s 00:06:53.777 sys 0m0.668s 00:06:53.777 09:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # xtrace_disable 
00:06:53.777 09:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.777 ************************************ 00:06:53.777 END TEST locking_app_on_locked_coremask 00:06:53.777 ************************************ 00:06:54.038 09:27:53 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:54.038 09:27:53 event.cpu_locks -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:54.038 09:27:53 event.cpu_locks -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:54.038 09:27:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.038 ************************************ 00:06:54.038 START TEST locking_overlapped_coremask 00:06:54.038 ************************************ 00:06:54.038 09:27:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # locking_overlapped_coremask 00:06:54.038 09:27:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3134121 00:06:54.038 09:27:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:54.038 09:27:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3134121 /var/tmp/spdk.sock 00:06:54.038 09:27:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # '[' -z 3134121 ']' 00:06:54.038 09:27:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.038 09:27:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:54.038 09:27:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.038 09:27:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:54.038 09:27:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.038 [2024-10-07 09:27:53.530513] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:06:54.038 [2024-10-07 09:27:53.530564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3134121 ] 00:06:54.038 [2024-10-07 09:27:53.607392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.038 [2024-10-07 09:27:53.665705] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.038 [2024-10-07 09:27:53.665908] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.038 [2024-10-07 09:27:53.665909] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.983 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:54.983 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@867 -- # return 0 00:06:54.983 09:27:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3134423 00:06:54.983 09:27:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3134423 /var/tmp/spdk2.sock 00:06:54.983 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # local es=0 00:06:54.983 09:27:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:54.983 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # valid_exec_arg waitforlisten 3134423 /var/tmp/spdk2.sock 00:06:54.983 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # local arg=waitforlisten 00:06:54.983 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:06:54.983 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@645 -- # type -t waitforlisten 00:06:54.983 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:06:54.983 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@656 -- # waitforlisten 3134423 /var/tmp/spdk2.sock 00:06:54.983 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # '[' -z 3134423 ']' 00:06:54.983 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.983 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:54.983 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.983 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:54.983 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.983 [2024-10-07 09:27:54.371878] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:06:54.983 [2024-10-07 09:27:54.371934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3134423 ] 00:06:54.983 [2024-10-07 09:27:54.467921] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3134121 has claimed it. 00:06:54.983 [2024-10-07 09:27:54.467963] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:55.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 849: kill: (3134423) - No such process 00:06:55.554 ERROR: process (pid: 3134423) is no longer running 00:06:55.554 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:55.554 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@867 -- # return 1 00:06:55.554 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@656 -- # es=1 00:06:55.554 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:06:55.554 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:06:55.554 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:06:55.554 09:27:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:55.554 09:27:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:55.554 09:27:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:55.554 09:27:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:55.554 09:27:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3134121 00:06:55.554 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' -z 3134121 ']' 00:06:55.554 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # kill -0 3134121 00:06:55.554 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # uname 00:06:55.554 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:06:55.554 09:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3134121 00:06:55.554 09:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:06:55.554 09:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:06:55.554 09:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3134121' 00:06:55.554 killing process with pid 3134121 00:06:55.554 09:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # kill 3134121 00:06:55.554 09:27:55 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@977 -- # wait 3134121 00:06:55.815 00:06:55.815 real 0m1.764s 00:06:55.815 user 0m5.013s 00:06:55.815 sys 0m0.397s 00:06:55.815 09:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:55.815 09:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.816 ************************************ 00:06:55.816 END TEST locking_overlapped_coremask 00:06:55.816 ************************************ 00:06:55.816 09:27:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:55.816 09:27:55 event.cpu_locks -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:55.816 09:27:55 event.cpu_locks -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:55.816 09:27:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.816 ************************************ 00:06:55.816 START TEST locking_overlapped_coremask_via_rpc 00:06:55.816 ************************************ 00:06:55.816 09:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # locking_overlapped_coremask_via_rpc 00:06:55.816 09:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3134498 00:06:55.816 09:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3134498 /var/tmp/spdk.sock 00:06:55.816 09:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:55.816 09:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # '[' -z 3134498 ']' 00:06:55.816 09:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.816 09:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:55.816 09:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.816 09:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:55.816 09:27:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.816 [2024-10-07 09:27:55.386010] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:06:55.816 [2024-10-07 09:27:55.386067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3134498 ] 00:06:55.816 [2024-10-07 09:27:55.466595] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:55.816 [2024-10-07 09:27:55.466626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.077 [2024-10-07 09:27:55.527647] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.077 [2024-10-07 09:27:55.527812] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.077 [2024-10-07 09:27:55.527904] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.648 09:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:56.648 09:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@867 -- # return 0 00:06:56.648 09:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3134828 00:06:56.648 09:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3134828 /var/tmp/spdk2.sock 00:06:56.648 09:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # '[' -z 3134828 ']' 00:06:56.648 09:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:56.648 09:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.648 09:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:56.648 09:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.648 09:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:56.648 09:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.648 [2024-10-07 09:27:56.232238] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:06:56.648 [2024-10-07 09:27:56.232291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3134828 ] 00:06:56.648 [2024-10-07 09:27:56.305933] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:56.648 [2024-10-07 09:27:56.305955] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.908 [2024-10-07 09:27:56.415633] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.908 [2024-10-07 09:27:56.415777] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.908 [2024-10-07 09:27:56.415778] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@867 -- # return 0 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # local es=0 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@656 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.479 [2024-10-07 09:27:57.026674] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3134498 has claimed it. 
00:06:57.479 request: 00:06:57.479 { 00:06:57.479 "method": "framework_enable_cpumask_locks", 00:06:57.479 "req_id": 1 00:06:57.479 } 00:06:57.479 Got JSON-RPC error response 00:06:57.479 response: 00:06:57.479 { 00:06:57.479 "code": -32603, 00:06:57.479 "message": "Failed to claim CPU core: 2" 00:06:57.479 } 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@656 -- # es=1 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3134498 /var/tmp/spdk.sock 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # '[' -z 3134498 ']' 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:57.479 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.740 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:57.740 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@867 -- # return 0 00:06:57.740 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3134828 /var/tmp/spdk2.sock 00:06:57.740 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # '[' -z 3134828 ']' 00:06:57.740 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.740 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local max_retries=100 00:06:57.740 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:57.740 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@843 -- # xtrace_disable 00:06:57.740 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.740 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:06:57.740 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@867 -- # return 0 00:06:57.740 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:57.740 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:57.740 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:57.740 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:57.740 00:06:57.740 real 0m2.084s 00:06:57.740 user 0m0.874s 00:06:57.740 sys 0m0.139s 00:06:57.740 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:57.740 09:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.740 ************************************ 00:06:57.740 END TEST locking_overlapped_coremask_via_rpc 00:06:57.740 ************************************ 00:06:58.001 09:27:57 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:58.001 09:27:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3134498 ]] 00:06:58.001 09:27:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3134498 00:06:58.001 09:27:57 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' -z 3134498 ']' 00:06:58.001 09:27:57 event.cpu_locks -- common/autotest_common.sh@957 -- # kill -0 3134498 00:06:58.001 09:27:57 event.cpu_locks -- common/autotest_common.sh@958 -- # uname 00:06:58.001 09:27:57 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:06:58.001 09:27:57 event.cpu_locks -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3134498 00:06:58.001 09:27:57 event.cpu_locks -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:06:58.001 09:27:57 event.cpu_locks -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:06:58.001 09:27:57 event.cpu_locks -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3134498' 00:06:58.001 killing process with pid 3134498 00:06:58.001 09:27:57 event.cpu_locks -- common/autotest_common.sh@972 -- # kill 3134498 00:06:58.001 09:27:57 event.cpu_locks -- common/autotest_common.sh@977 -- # wait 3134498 00:06:58.262 09:27:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3134828 ]] 00:06:58.262 09:27:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3134828 00:06:58.262 09:27:57 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' -z 3134828 ']' 00:06:58.262 09:27:57 event.cpu_locks -- common/autotest_common.sh@957 -- # kill -0 3134828 00:06:58.262 09:27:57 event.cpu_locks -- common/autotest_common.sh@958 -- # uname 00:06:58.262 09:27:57 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' 
Linux = Linux ']' 00:06:58.262 09:27:57 event.cpu_locks -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3134828 00:06:58.262 09:27:57 event.cpu_locks -- common/autotest_common.sh@959 -- # process_name=reactor_2 00:06:58.262 09:27:57 event.cpu_locks -- common/autotest_common.sh@963 -- # '[' reactor_2 = sudo ']' 00:06:58.262 09:27:57 event.cpu_locks -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3134828' 00:06:58.262 killing process with pid 3134828 00:06:58.262 09:27:57 event.cpu_locks -- common/autotest_common.sh@972 -- # kill 3134828 00:06:58.262 09:27:57 event.cpu_locks -- common/autotest_common.sh@977 -- # wait 3134828 00:06:58.524 09:27:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.524 09:27:57 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:58.524 09:27:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3134498 ]] 00:06:58.524 09:27:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3134498 00:06:58.524 09:27:57 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' -z 3134498 ']' 00:06:58.524 09:27:57 event.cpu_locks -- common/autotest_common.sh@957 -- # kill -0 3134498 00:06:58.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 957: kill: (3134498) - No such process 00:06:58.524 09:27:57 event.cpu_locks -- common/autotest_common.sh@980 -- # echo 'Process with pid 3134498 is not found' 00:06:58.524 Process with pid 3134498 is not found 00:06:58.524 09:27:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3134828 ]] 00:06:58.524 09:27:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3134828 00:06:58.524 09:27:57 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' -z 3134828 ']' 00:06:58.524 09:27:57 event.cpu_locks -- common/autotest_common.sh@957 -- # kill -0 3134828 00:06:58.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 957: kill: (3134828) - No such process 00:06:58.524 09:27:57 event.cpu_locks -- common/autotest_common.sh@980 -- # echo 'Process with pid 3134828 is not found' 00:06:58.524 Process with pid 3134828 is not found 00:06:58.524 09:27:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.524 00:06:58.524 real 0m16.805s 00:06:58.524 user 0m28.748s 00:06:58.524 sys 0m5.195s 00:06:58.524 09:27:57 event.cpu_locks -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:58.524 09:27:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.524 ************************************ 00:06:58.524 END TEST cpu_locks 00:06:58.524 ************************************ 00:06:58.524 00:06:58.524 real 0m43.489s 00:06:58.524 user 1m24.729s 00:06:58.524 sys 0m8.732s 00:06:58.524 09:27:58 event -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:58.525 09:27:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.525 ************************************ 00:06:58.525 END TEST event 00:06:58.525 ************************************ 00:06:58.525 09:27:58 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:58.525 09:27:58 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:06:58.525 09:27:58 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:58.525 09:27:58 -- common/autotest_common.sh@10 -- # set +x 00:06:58.525 ************************************ 00:06:58.525 START TEST thread 00:06:58.525 ************************************ 00:06:58.525 09:27:58 thread -- 
common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:58.786 * Looking for test storage... 00:06:58.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:58.786 09:27:58 thread -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:06:58.786 09:27:58 thread -- common/autotest_common.sh@1626 -- # lcov --version 00:06:58.786 09:27:58 thread -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:06:58.786 09:27:58 thread -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:06:58.786 09:27:58 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.786 09:27:58 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.786 09:27:58 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.786 09:27:58 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.786 09:27:58 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.786 09:27:58 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.786 09:27:58 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.786 09:27:58 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.786 09:27:58 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.786 09:27:58 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.786 09:27:58 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.786 09:27:58 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:58.786 09:27:58 thread -- scripts/common.sh@345 -- # : 1 00:06:58.786 09:27:58 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.786 09:27:58 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.786 09:27:58 thread -- scripts/common.sh@365 -- # decimal 1 00:06:58.786 09:27:58 thread -- scripts/common.sh@353 -- # local d=1 00:06:58.786 09:27:58 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.786 09:27:58 thread -- scripts/common.sh@355 -- # echo 1 00:06:58.786 09:27:58 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.786 09:27:58 thread -- scripts/common.sh@366 -- # decimal 2 00:06:58.786 09:27:58 thread -- scripts/common.sh@353 -- # local d=2 00:06:58.786 09:27:58 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.786 09:27:58 thread -- scripts/common.sh@355 -- # echo 2 00:06:58.786 09:27:58 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.786 09:27:58 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.786 09:27:58 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.786 09:27:58 thread -- scripts/common.sh@368 -- # return 0 00:06:58.786 09:27:58 thread -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.786 09:27:58 thread -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:06:58.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.787 --rc genhtml_branch_coverage=1 00:06:58.787 --rc genhtml_function_coverage=1 00:06:58.787 --rc genhtml_legend=1 00:06:58.787 --rc geninfo_all_blocks=1 00:06:58.787 --rc geninfo_unexecuted_blocks=1 00:06:58.787 00:06:58.787 ' 00:06:58.787 09:27:58 thread -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:06:58.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.787 --rc genhtml_branch_coverage=1 00:06:58.787 --rc genhtml_function_coverage=1 00:06:58.787 --rc genhtml_legend=1 00:06:58.787 --rc geninfo_all_blocks=1 00:06:58.787 --rc geninfo_unexecuted_blocks=1 00:06:58.787 
00:06:58.787 ' 00:06:58.787 09:27:58 thread -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:06:58.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.787 --rc genhtml_branch_coverage=1 00:06:58.787 --rc genhtml_function_coverage=1 00:06:58.787 --rc genhtml_legend=1 00:06:58.787 --rc geninfo_all_blocks=1 00:06:58.787 --rc geninfo_unexecuted_blocks=1 00:06:58.787 00:06:58.787 ' 00:06:58.787 09:27:58 thread -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:06:58.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.787 --rc genhtml_branch_coverage=1 00:06:58.787 --rc genhtml_function_coverage=1 00:06:58.787 --rc genhtml_legend=1 00:06:58.787 --rc geninfo_all_blocks=1 00:06:58.787 --rc geninfo_unexecuted_blocks=1 00:06:58.787 00:06:58.787 ' 00:06:58.787 09:27:58 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.787 09:27:58 thread -- common/autotest_common.sh@1104 -- # '[' 8 -le 1 ']' 00:06:58.787 09:27:58 thread -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:58.787 09:27:58 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.787 ************************************ 00:06:58.787 START TEST thread_poller_perf 00:06:58.787 ************************************ 00:06:58.787 09:27:58 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.787 [2024-10-07 09:27:58.398423] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:06:58.787 [2024-10-07 09:27:58.398528] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3135284 ] 00:06:59.048 [2024-10-07 09:27:58.479662] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.048 [2024-10-07 09:27:58.549265] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.048 Running 1000 pollers for 1 seconds with 1 microseconds period. 
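The run above is the timed-poller case; reading the flags against the banner the harness echoes, -b is the poller count, -l the poller period in microseconds, and -t the run time in seconds. The equivalent direct invocation, with the workspace path as used throughout this job, would be:

  ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 pollers, 1 us period, 1 s run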
00:06:59.989 ====================================== 00:06:59.989 busy:2407750632 (cyc) 00:06:59.989 total_run_count: 419000 00:06:59.989 tsc_hz: 2400000000 (cyc) 00:06:59.989 ====================================== 00:06:59.989 poller_cost: 5746 (cyc), 2394 (nsec) 00:06:59.989 00:06:59.989 real 0m1.224s 00:06:59.989 user 0m1.127s 00:06:59.989 sys 0m0.092s 00:06:59.989 09:27:59 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # xtrace_disable 00:06:59.989 09:27:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.989 ************************************ 00:06:59.989 END TEST thread_poller_perf 00:06:59.989 ************************************ 00:06:59.989 09:27:59 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.989 09:27:59 thread -- common/autotest_common.sh@1104 -- # '[' 8 -le 1 ']' 00:06:59.989 09:27:59 thread -- common/autotest_common.sh@1110 -- # xtrace_disable 00:06:59.989 09:27:59 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.250 ************************************ 00:07:00.250 START TEST thread_poller_perf 00:07:00.250 ************************************ 00:07:00.250 09:27:59 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.250 [2024-10-07 09:27:59.700722] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:07:00.250 [2024-10-07 09:27:59.700824] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3135632 ] 00:07:00.250 [2024-10-07 09:27:59.780325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.250 [2024-10-07 09:27:59.849012] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.250 Running 1000 pollers for 1 seconds with 0 microseconds period. 
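The poller_cost figure printed after each run follows directly from the counters in the same block: cost in cycles is busy / total_run_count, and the nanosecond figure converts via tsc_hz. For the 1 us-period run above: 2407750632 / 419000 ≈ 5746 cyc, and 5746 * 1e9 / 2400000000 ≈ 2394 nsec, matching the report. The same arithmetic applies to the 0 us-period run that follows, where the much higher iteration count drives the per-poll cost down.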
00:07:01.636 ====================================== 00:07:01.636 busy:2401670440 (cyc) 00:07:01.636 total_run_count: 5561000 00:07:01.636 tsc_hz: 2400000000 (cyc) 00:07:01.636 ====================================== 00:07:01.636 poller_cost: 431 (cyc), 179 (nsec) 00:07:01.636 00:07:01.636 real 0m1.215s 00:07:01.636 user 0m1.129s 00:07:01.636 sys 0m0.082s 00:07:01.636 09:28:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # xtrace_disable 00:07:01.636 09:28:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.636 ************************************ 00:07:01.636 END TEST thread_poller_perf 00:07:01.636 ************************************ 00:07:01.636 09:28:00 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:01.636 00:07:01.636 real 0m2.821s 00:07:01.636 user 0m2.426s 00:07:01.636 sys 0m0.407s 00:07:01.636 09:28:00 thread -- common/autotest_common.sh@1129 -- # xtrace_disable 00:07:01.636 09:28:00 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.636 ************************************ 00:07:01.636 END TEST thread 00:07:01.636 ************************************ 00:07:01.636 09:28:00 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:01.636 09:28:00 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:01.636 09:28:00 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:07:01.636 09:28:00 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:07:01.636 09:28:00 -- common/autotest_common.sh@10 -- # set +x 00:07:01.636 ************************************ 00:07:01.636 START TEST app_cmdline 00:07:01.636 ************************************ 00:07:01.636 09:28:01 app_cmdline -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:01.636 * Looking for test storage... 00:07:01.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:01.636 09:28:01 app_cmdline -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:07:01.636 09:28:01 app_cmdline -- common/autotest_common.sh@1626 -- # lcov --version 00:07:01.636 09:28:01 app_cmdline -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:07:01.636 09:28:01 app_cmdline -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.636 09:28:01 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:01.636 09:28:01 app_cmdline -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.636 09:28:01 app_cmdline -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:07:01.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.636 --rc genhtml_branch_coverage=1 00:07:01.636 --rc genhtml_function_coverage=1 00:07:01.636 --rc genhtml_legend=1 00:07:01.636 --rc geninfo_all_blocks=1 00:07:01.636 --rc geninfo_unexecuted_blocks=1 00:07:01.636 00:07:01.636 ' 00:07:01.636 09:28:01 app_cmdline -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:07:01.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.636 --rc genhtml_branch_coverage=1 00:07:01.636 --rc genhtml_function_coverage=1 00:07:01.636 --rc genhtml_legend=1 00:07:01.636 --rc geninfo_all_blocks=1 00:07:01.636 --rc geninfo_unexecuted_blocks=1 00:07:01.636 00:07:01.636 ' 00:07:01.636 09:28:01 app_cmdline -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:07:01.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.636 --rc genhtml_branch_coverage=1 00:07:01.636 --rc genhtml_function_coverage=1 00:07:01.636 --rc genhtml_legend=1 00:07:01.636 --rc geninfo_all_blocks=1 00:07:01.636 --rc geninfo_unexecuted_blocks=1 00:07:01.636 00:07:01.636 ' 00:07:01.636 09:28:01 app_cmdline -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:07:01.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.637 --rc genhtml_branch_coverage=1 00:07:01.637 --rc genhtml_function_coverage=1 00:07:01.637 --rc genhtml_legend=1 00:07:01.637 --rc geninfo_all_blocks=1 00:07:01.637 --rc geninfo_unexecuted_blocks=1 00:07:01.637 00:07:01.637 ' 00:07:01.637 09:28:01 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:01.637 09:28:01 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3136044 00:07:01.637 09:28:01 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3136044 00:07:01.637 09:28:01 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:01.637 09:28:01 app_cmdline -- common/autotest_common.sh@834 -- # '[' -z 3136044 ']' 00:07:01.637 09:28:01 app_cmdline -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.637 09:28:01 app_cmdline -- common/autotest_common.sh@839 -- # local max_retries=100 00:07:01.637 09:28:01 app_cmdline -- common/autotest_common.sh@841 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.637 09:28:01 app_cmdline -- common/autotest_common.sh@843 -- # xtrace_disable 00:07:01.637 09:28:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.897 [2024-10-07 09:28:01.318594] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:07:01.897 [2024-10-07 09:28:01.318675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3136044 ] 00:07:01.897 [2024-10-07 09:28:01.398344] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.897 [2024-10-07 09:28:01.469117] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.467 09:28:02 app_cmdline -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:07:02.467 09:28:02 app_cmdline -- common/autotest_common.sh@867 -- # return 0 00:07:02.467 09:28:02 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:02.727 { 00:07:02.727 "version": "SPDK v25.01-pre git sha1 70750b651", 00:07:02.727 "fields": { 00:07:02.727 "major": 25, 00:07:02.727 "minor": 1, 00:07:02.727 "patch": 0, 00:07:02.727 "suffix": "-pre", 00:07:02.727 "commit": "70750b651" 00:07:02.727 } 00:07:02.727 } 00:07:02.727 09:28:02 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:02.727 09:28:02 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:02.727 09:28:02 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:02.727 09:28:02 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:02.727 09:28:02 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:02.727 09:28:02 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:02.727 09:28:02 app_cmdline -- common/autotest_common.sh@564 -- # xtrace_disable 00:07:02.727 09:28:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.727 09:28:02 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:02.727 09:28:02 app_cmdline -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:07:02.727 09:28:02 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:02.727 09:28:02 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:02.727 09:28:02 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.727 09:28:02 app_cmdline -- common/autotest_common.sh@653 -- # local es=0 00:07:02.727 09:28:02 app_cmdline -- common/autotest_common.sh@655 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.727 09:28:02 app_cmdline -- common/autotest_common.sh@641 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.727 09:28:02 app_cmdline -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:07:02.727 09:28:02 app_cmdline -- common/autotest_common.sh@645 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.727 09:28:02 app_cmdline -- common/autotest_common.sh@645 -- # 
case "$(type -t "$arg")" in 00:07:02.727 09:28:02 app_cmdline -- common/autotest_common.sh@647 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.727 09:28:02 app_cmdline -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:07:02.727 09:28:02 app_cmdline -- common/autotest_common.sh@647 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.727 09:28:02 app_cmdline -- common/autotest_common.sh@647 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:02.727 09:28:02 app_cmdline -- common/autotest_common.sh@656 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.987 request: 00:07:02.987 { 00:07:02.987 "method": "env_dpdk_get_mem_stats", 00:07:02.987 "req_id": 1 00:07:02.987 } 00:07:02.987 Got JSON-RPC error response 00:07:02.987 response: 00:07:02.987 { 00:07:02.987 "code": -32601, 00:07:02.987 "message": "Method not found" 00:07:02.987 } 00:07:02.987 09:28:02 app_cmdline -- common/autotest_common.sh@656 -- # es=1 00:07:02.987 09:28:02 app_cmdline -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:07:02.987 09:28:02 app_cmdline -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:07:02.987 09:28:02 app_cmdline -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:07:02.987 09:28:02 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3136044 00:07:02.987 09:28:02 app_cmdline -- common/autotest_common.sh@953 -- # '[' -z 3136044 ']' 00:07:02.987 09:28:02 app_cmdline -- common/autotest_common.sh@957 -- # kill -0 3136044 00:07:02.987 09:28:02 app_cmdline -- common/autotest_common.sh@958 -- # uname 00:07:02.987 09:28:02 app_cmdline -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:07:02.987 09:28:02 app_cmdline -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3136044 00:07:02.987 09:28:02 app_cmdline -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:07:02.987 09:28:02 app_cmdline -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:07:02.987 09:28:02 app_cmdline -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3136044' 00:07:02.987 killing process with pid 3136044 00:07:02.987 09:28:02 app_cmdline -- common/autotest_common.sh@972 -- # kill 3136044 00:07:02.987 09:28:02 app_cmdline -- common/autotest_common.sh@977 -- # wait 3136044 00:07:03.247 00:07:03.247 real 0m1.750s 00:07:03.247 user 0m2.054s 00:07:03.247 sys 0m0.490s 00:07:03.247 09:28:02 app_cmdline -- common/autotest_common.sh@1129 -- # xtrace_disable 00:07:03.247 09:28:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:03.247 ************************************ 00:07:03.247 END TEST app_cmdline 00:07:03.247 ************************************ 00:07:03.247 09:28:02 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:03.247 09:28:02 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:07:03.247 09:28:02 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:07:03.247 09:28:02 -- common/autotest_common.sh@10 -- # set +x 00:07:03.247 ************************************ 00:07:03.247 START TEST version 00:07:03.247 ************************************ 00:07:03.247 09:28:02 version -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:03.508 * Looking for test storage... 
00:07:03.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:03.508 09:28:02 version -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:07:03.508 09:28:02 version -- common/autotest_common.sh@1626 -- # lcov --version 00:07:03.508 09:28:02 version -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:07:03.508 09:28:03 version -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:07:03.508 09:28:03 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.508 09:28:03 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.508 09:28:03 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.508 09:28:03 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.508 09:28:03 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.508 09:28:03 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.508 09:28:03 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.508 09:28:03 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.508 09:28:03 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.508 09:28:03 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.508 09:28:03 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.508 09:28:03 version -- scripts/common.sh@344 -- # case "$op" in 00:07:03.508 09:28:03 version -- scripts/common.sh@345 -- # : 1 00:07:03.508 09:28:03 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.508 09:28:03 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.508 09:28:03 version -- scripts/common.sh@365 -- # decimal 1 00:07:03.508 09:28:03 version -- scripts/common.sh@353 -- # local d=1 00:07:03.508 09:28:03 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.508 09:28:03 version -- scripts/common.sh@355 -- # echo 1 00:07:03.508 09:28:03 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.508 09:28:03 version -- scripts/common.sh@366 -- # decimal 2 00:07:03.508 09:28:03 version -- scripts/common.sh@353 -- # local d=2 00:07:03.508 09:28:03 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.508 09:28:03 version -- scripts/common.sh@355 -- # echo 2 00:07:03.508 09:28:03 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.508 09:28:03 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.508 09:28:03 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.508 09:28:03 version -- scripts/common.sh@368 -- # return 0 00:07:03.508 09:28:03 version -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.508 09:28:03 version -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:07:03.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.508 --rc genhtml_branch_coverage=1 00:07:03.508 --rc genhtml_function_coverage=1 00:07:03.508 --rc genhtml_legend=1 00:07:03.508 --rc geninfo_all_blocks=1 00:07:03.508 --rc geninfo_unexecuted_blocks=1 00:07:03.508 00:07:03.508 ' 00:07:03.508 09:28:03 version -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:07:03.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.508 --rc genhtml_branch_coverage=1 00:07:03.508 --rc genhtml_function_coverage=1 00:07:03.508 --rc genhtml_legend=1 00:07:03.508 --rc geninfo_all_blocks=1 00:07:03.508 --rc geninfo_unexecuted_blocks=1 00:07:03.508 00:07:03.508 ' 00:07:03.508 09:28:03 version -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:07:03.508 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.508 --rc genhtml_branch_coverage=1 00:07:03.508 --rc genhtml_function_coverage=1 00:07:03.508 --rc genhtml_legend=1 00:07:03.508 --rc geninfo_all_blocks=1 00:07:03.508 --rc geninfo_unexecuted_blocks=1 00:07:03.508 00:07:03.508 ' 00:07:03.508 09:28:03 version -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:07:03.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.508 --rc genhtml_branch_coverage=1 00:07:03.508 --rc genhtml_function_coverage=1 00:07:03.508 --rc genhtml_legend=1 00:07:03.508 --rc geninfo_all_blocks=1 00:07:03.508 --rc geninfo_unexecuted_blocks=1 00:07:03.508 00:07:03.508 ' 00:07:03.508 09:28:03 version -- app/version.sh@17 -- # get_header_version major 00:07:03.508 09:28:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:03.508 09:28:03 version -- app/version.sh@14 -- # cut -f2 00:07:03.508 09:28:03 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.508 09:28:03 version -- app/version.sh@17 -- # major=25 00:07:03.508 09:28:03 version -- app/version.sh@18 -- # get_header_version minor 00:07:03.508 09:28:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:03.508 09:28:03 version -- app/version.sh@14 -- # cut -f2 00:07:03.508 09:28:03 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.508 09:28:03 version -- app/version.sh@18 -- # minor=1 00:07:03.508 09:28:03 version -- app/version.sh@19 -- # get_header_version patch 00:07:03.508 09:28:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:03.508 09:28:03 version -- app/version.sh@14 -- # cut -f2 00:07:03.508 09:28:03 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.508 09:28:03 version -- app/version.sh@19 -- # patch=0 00:07:03.508 09:28:03 version -- app/version.sh@20 -- # get_header_version suffix 00:07:03.508 09:28:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:03.508 09:28:03 version -- app/version.sh@14 -- # cut -f2 00:07:03.508 09:28:03 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.508 09:28:03 version -- app/version.sh@20 -- # suffix=-pre 00:07:03.508 09:28:03 version -- app/version.sh@22 -- # version=25.1 00:07:03.508 09:28:03 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:03.508 09:28:03 version -- app/version.sh@28 -- # version=25.1rc0 00:07:03.508 09:28:03 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:03.508 09:28:03 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:03.508 09:28:03 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:03.508 09:28:03 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:03.508 00:07:03.508 real 0m0.303s 00:07:03.508 user 0m0.188s 00:07:03.508 sys 0m0.162s 00:07:03.508 09:28:03 version -- common/autotest_common.sh@1129 -- # xtrace_disable 00:07:03.508 
09:28:03 version -- common/autotest_common.sh@10 -- # set +x 00:07:03.508 ************************************ 00:07:03.508 END TEST version 00:07:03.508 ************************************ 00:07:03.769 09:28:03 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:03.769 09:28:03 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:03.769 09:28:03 -- spdk/autotest.sh@194 -- # uname -s 00:07:03.769 09:28:03 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:03.769 09:28:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:03.769 09:28:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:03.769 09:28:03 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:03.769 09:28:03 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:03.769 09:28:03 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:03.769 09:28:03 -- common/autotest_common.sh@733 -- # xtrace_disable 00:07:03.769 09:28:03 -- common/autotest_common.sh@10 -- # set +x 00:07:03.769 09:28:03 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:03.769 09:28:03 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:03.769 09:28:03 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:03.769 09:28:03 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:03.769 09:28:03 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:03.769 09:28:03 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:03.769 09:28:03 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:03.769 09:28:03 -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:07:03.769 09:28:03 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:07:03.769 09:28:03 -- common/autotest_common.sh@10 -- # set +x 00:07:03.769 ************************************ 00:07:03.769 START TEST nvmf_tcp 00:07:03.769 ************************************ 00:07:03.769 09:28:03 nvmf_tcp -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:03.769 * Looking for test storage... 
00:07:03.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:03.769 09:28:03 nvmf_tcp -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:07:03.769 09:28:03 nvmf_tcp -- common/autotest_common.sh@1626 -- # lcov --version 00:07:03.769 09:28:03 nvmf_tcp -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:07:04.030 09:28:03 nvmf_tcp -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.030 09:28:03 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:04.030 09:28:03 nvmf_tcp -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.030 09:28:03 nvmf_tcp -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:07:04.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.030 --rc genhtml_branch_coverage=1 00:07:04.030 --rc genhtml_function_coverage=1 00:07:04.030 --rc genhtml_legend=1 00:07:04.030 --rc geninfo_all_blocks=1 00:07:04.030 --rc geninfo_unexecuted_blocks=1 00:07:04.030 00:07:04.030 ' 00:07:04.030 09:28:03 nvmf_tcp -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:07:04.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.030 --rc genhtml_branch_coverage=1 00:07:04.030 --rc genhtml_function_coverage=1 00:07:04.030 --rc genhtml_legend=1 00:07:04.030 --rc geninfo_all_blocks=1 00:07:04.030 --rc geninfo_unexecuted_blocks=1 00:07:04.030 00:07:04.030 ' 00:07:04.030 09:28:03 nvmf_tcp -- common/autotest_common.sh@1640 -- # export 
'LCOV=lcov 00:07:04.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.030 --rc genhtml_branch_coverage=1 00:07:04.030 --rc genhtml_function_coverage=1 00:07:04.030 --rc genhtml_legend=1 00:07:04.030 --rc geninfo_all_blocks=1 00:07:04.030 --rc geninfo_unexecuted_blocks=1 00:07:04.030 00:07:04.030 ' 00:07:04.030 09:28:03 nvmf_tcp -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:07:04.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.030 --rc genhtml_branch_coverage=1 00:07:04.030 --rc genhtml_function_coverage=1 00:07:04.030 --rc genhtml_legend=1 00:07:04.030 --rc geninfo_all_blocks=1 00:07:04.030 --rc geninfo_unexecuted_blocks=1 00:07:04.030 00:07:04.030 ' 00:07:04.030 09:28:03 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:04.030 09:28:03 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:04.030 09:28:03 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:04.030 09:28:03 nvmf_tcp -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:07:04.030 09:28:03 nvmf_tcp -- common/autotest_common.sh@1110 -- # xtrace_disable 00:07:04.030 09:28:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:04.030 ************************************ 00:07:04.030 START TEST nvmf_target_core 00:07:04.030 ************************************ 00:07:04.030 09:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:04.030 * Looking for test storage... 00:07:04.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:04.030 09:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:07:04.030 09:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1626 -- # lcov --version 00:07:04.030 09:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:07:04.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.292 --rc genhtml_branch_coverage=1 00:07:04.292 --rc genhtml_function_coverage=1 00:07:04.292 --rc genhtml_legend=1 00:07:04.292 --rc geninfo_all_blocks=1 00:07:04.292 --rc geninfo_unexecuted_blocks=1 00:07:04.292 00:07:04.292 ' 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:07:04.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.292 --rc genhtml_branch_coverage=1 00:07:04.292 --rc genhtml_function_coverage=1 00:07:04.292 --rc genhtml_legend=1 00:07:04.292 --rc geninfo_all_blocks=1 00:07:04.292 --rc geninfo_unexecuted_blocks=1 00:07:04.292 00:07:04.292 ' 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:07:04.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.292 --rc genhtml_branch_coverage=1 00:07:04.292 --rc genhtml_function_coverage=1 00:07:04.292 --rc genhtml_legend=1 00:07:04.292 --rc geninfo_all_blocks=1 00:07:04.292 --rc geninfo_unexecuted_blocks=1 00:07:04.292 00:07:04.292 ' 00:07:04.292 09:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:07:04.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.293 --rc genhtml_branch_coverage=1 00:07:04.293 --rc genhtml_function_coverage=1 00:07:04.293 --rc genhtml_legend=1 00:07:04.293 --rc geninfo_all_blocks=1 00:07:04.293 --rc geninfo_unexecuted_blocks=1 00:07:04.293 00:07:04.293 ' 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:04.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:04.293 09:28:03 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1110 -- # xtrace_disable 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:04.293 ************************************ 00:07:04.293 START TEST nvmf_abort 00:07:04.293 ************************************ 00:07:04.293 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:04.555 * Looking for test storage... 00:07:04.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.555 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:07:04.555 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1626 -- # lcov --version 00:07:04.555 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:04.555 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:07:04.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.556 --rc genhtml_branch_coverage=1 00:07:04.556 --rc genhtml_function_coverage=1 00:07:04.556 --rc genhtml_legend=1 00:07:04.556 --rc geninfo_all_blocks=1 00:07:04.556 --rc geninfo_unexecuted_blocks=1 00:07:04.556 00:07:04.556 ' 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:07:04.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.556 --rc genhtml_branch_coverage=1 00:07:04.556 --rc genhtml_function_coverage=1 00:07:04.556 --rc genhtml_legend=1 00:07:04.556 --rc geninfo_all_blocks=1 00:07:04.556 --rc geninfo_unexecuted_blocks=1 00:07:04.556 00:07:04.556 ' 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:07:04.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.556 --rc genhtml_branch_coverage=1 00:07:04.556 --rc genhtml_function_coverage=1 00:07:04.556 --rc genhtml_legend=1 00:07:04.556 --rc geninfo_all_blocks=1 00:07:04.556 --rc geninfo_unexecuted_blocks=1 00:07:04.556 00:07:04.556 ' 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:07:04.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.556 --rc genhtml_branch_coverage=1 00:07:04.556 --rc genhtml_function_coverage=1 00:07:04.556 --rc genhtml_legend=1 00:07:04.556 --rc geninfo_all_blocks=1 00:07:04.556 --rc geninfo_unexecuted_blocks=1 00:07:04.556 00:07:04.556 ' 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.556 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:04.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:04.557 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:04.557 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:04.557 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:04.557 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:04.557 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:04.557 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:04.557 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:04.557 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.557 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:04.557 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:04.557 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:04.557 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.557 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:04.557 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.557 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:04.557 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:04.557 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:04.557 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 
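Note: the "[: : integer expression expected" message above is a recorded script bug, not a test failure, and it repeats each time common.sh is sourced. Line 33 runs a numeric test, '[' '' -eq 1 ']', against a variable that expands empty in this environment; [ rejects the empty operand, the test evaluates false, and the script continues. A minimal reproduction and a guarded form, with a hypothetical variable name since the log only shows the empty expansion:

  flag=''                                 # stands in for the empty variable at common.sh line 33
  [ "$flag" -eq 1 ] && echo enabled       # emits: [: : integer expression expected
  [ "${flag:-0}" -eq 1 ] && echo enabled  # guarded form: empty defaults to 0, no error
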
00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:12.702 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:12.702 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:12.702 09:28:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:12.702 Found net devices under 0000:31:00.0: cvl_0_0 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:12.702 Found net devices under 0000:31:00.1: cvl_0_1 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.702 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:12.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:12.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:07:12.703 00:07:12.703 --- 10.0.0.2 ping statistics --- 00:07:12.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.703 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:12.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:12.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:07:12.703 00:07:12.703 --- 10.0.0.1 ping statistics --- 00:07:12.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.703 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=3140624 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 3140624 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # '[' -z 3140624 ']' 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local max_retries=100 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@843 -- # xtrace_disable 00:07:12.703 09:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.703 [2024-10-07 09:28:11.932732] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
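Note: both pings succeeding confirms the topology the log built just above: the target-side E810 port (cvl_0_0, 10.0.0.2) lives in its own network namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, so NVMe/TCP traffic flows between two physical ports on one host (NET_TYPE=phy). Condensed from the commands recorded in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged with an SPDK_NVMF comment above
  ping -c 1 10.0.0.2                                    # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns

The target itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE), which is why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD above.
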
00:07:12.703 [2024-10-07 09:28:11.932794] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.703 [2024-10-07 09:28:12.024060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.703 [2024-10-07 09:28:12.120252] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:12.703 [2024-10-07 09:28:12.120321] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:12.703 [2024-10-07 09:28:12.120330] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.703 [2024-10-07 09:28:12.120338] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.703 [2024-10-07 09:28:12.120344] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:12.703 [2024-10-07 09:28:12.121691] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.703 [2024-10-07 09:28:12.121871] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.703 [2024-10-07 09:28:12.121871] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@867 -- # return 0 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@733 -- # xtrace_disable 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.275 [2024-10-07 09:28:12.827804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.275 Malloc0 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.275 Delay0 
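Note: the three "Reactor started on core N" lines match the -m 0xE mask handed to nvmf_tgt: 0xE is binary 1110, i.e. cores 1, 2 and 3, leaving core 0 free for the abort initiator launched below with -c 0x1, so target and initiator never contend for a core. A quick way to decode such masks:

  mask=0xE
  printf 'cores in %s:' "$mask"
  for i in {0..31}; do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done
  echo   # -> cores in 0xE: 1 2 3
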
00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.275 [2024-10-07 09:28:12.913169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:07:13.275 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:13.536 [2024-10-07 09:28:13.044363] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:15.453 Initializing NVMe Controllers 00:07:15.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:15.453 controller IO queue size 128 less than required 00:07:15.453 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:15.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:15.453 Initialization complete. Launching workers. 
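Note: at this point abort.sh has provisioned the whole target over RPC and started the abort workload; the results follow just below. Condensed from the rpc_cmd calls recorded above (rpc.py stands for the script's $rpc_py wrapper):

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256    # -u io-unit size in bytes, -a admin-queue depth
  rpc.py bdev_malloc_create 64 4096 -b Malloc0             # 64 MiB RAM bdev, 4096-byte blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000       # 1,000,000 us (~1 s) added read/write latency
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The Delay0 bdev is the point of the test: with roughly a second of injected latency, I/O issued at queue depth 128 stays in flight long enough for the abort example to cancel it, which is what the success counts reported below measure.
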
00:07:15.453 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28488 00:07:15.453 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28549, failed to submit 62 00:07:15.453 success 28492, unsuccessful 57, failed 0 00:07:15.453 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:15.453 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:07:15.453 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.453 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:07:15.453 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:15.453 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:15.453 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:15.453 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:15.453 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:15.453 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:15.453 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:15.453 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:15.453 rmmod nvme_tcp 00:07:15.713 rmmod nvme_fabrics 00:07:15.713 rmmod nvme_keyring 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 3140624 ']' 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 3140624 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' -z 3140624 ']' 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # kill -0 3140624 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # uname 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3140624 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3140624' 00:07:15.713 killing process with pid 3140624 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # kill 3140624 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@977 -- # wait 3140624 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:15.713 09:28:15 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:15.713 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:07:15.714 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:15.714 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:07:15.714 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:15.714 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:15.714 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.714 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.714 09:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:18.263 00:07:18.263 real 0m13.586s 00:07:18.263 user 0m13.716s 00:07:18.263 sys 0m6.784s 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # xtrace_disable 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.263 ************************************ 00:07:18.263 END TEST nvmf_abort 00:07:18.263 ************************************ 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1110 -- # xtrace_disable 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:18.263 ************************************ 00:07:18.263 START TEST nvmf_ns_hotplug_stress 00:07:18.263 ************************************ 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:18.263 * Looking for test storage... 
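Note on the nvmf_abort teardown just above, before the next test's prologue begins: killprocess stops the target (pid 3140624, identified as reactor_1 via ps), the nvme-tcp/fabrics/keyring modules are unloaded, and the firewall is restored by filtering rather than deleting rules one at a time; every rule was inserted with an SPDK_NVMF comment, so re-loading a save that drops those lines removes exactly the test's rules. A sketch of the pattern; the netns deletion is an assumption about _remove_spdk_ns, whose body the log hides behind xtrace_disable:

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the tagged rules
  ip -4 addr flush cvl_0_1                               # clear the initiator port
  ip netns del cvl_0_0_ns_spdk 2>/dev/null || true       # assumed body of _remove_spdk_ns
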
00:07:18.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1626 -- # lcov --version 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:07:18.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.263 --rc genhtml_branch_coverage=1 00:07:18.263 --rc genhtml_function_coverage=1 00:07:18.263 --rc genhtml_legend=1 00:07:18.263 --rc geninfo_all_blocks=1 00:07:18.263 --rc geninfo_unexecuted_blocks=1 00:07:18.263 00:07:18.263 ' 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:07:18.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.263 --rc genhtml_branch_coverage=1 00:07:18.263 --rc genhtml_function_coverage=1 00:07:18.263 --rc genhtml_legend=1 00:07:18.263 --rc geninfo_all_blocks=1 00:07:18.263 --rc geninfo_unexecuted_blocks=1 00:07:18.263 00:07:18.263 ' 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:07:18.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.263 --rc genhtml_branch_coverage=1 00:07:18.263 --rc genhtml_function_coverage=1 00:07:18.263 --rc genhtml_legend=1 00:07:18.263 --rc geninfo_all_blocks=1 00:07:18.263 --rc geninfo_unexecuted_blocks=1 00:07:18.263 00:07:18.263 ' 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:07:18.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.263 --rc genhtml_branch_coverage=1 00:07:18.263 --rc genhtml_function_coverage=1 00:07:18.263 --rc genhtml_legend=1 00:07:18.263 --rc geninfo_all_blocks=1 00:07:18.263 --rc geninfo_unexecuted_blocks=1 00:07:18.263 00:07:18.263 ' 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.263 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:18.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 
-- # [[ phy != virt ]] 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:18.264 09:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:26.471 09:28:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:26.471 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:26.471 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
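A note on the shell diagnostic recorded earlier in this trace ("/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected"): the traced command '[' '' -eq 1 ']' shows a variable that expands to the empty string being handed to the integer operator -eq, which test/[ rejects with exit status 2. The test is evidently used as a condition, so the failure only prints the message and the branch is skipped; setup carries on, as the rest of the trace shows. A minimal sketch of the failing pattern and one defensive rewrite (VAR is a stand-in name, not the variable common.sh actually uses):

    VAR=''
    [ "$VAR" -eq 1 ]          # bash: [: : integer expression expected (exit status 2)
    [ "${VAR:-0}" -eq 1 ]     # defaulting the empty expansion keeps the operand numeric
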
00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:26.471 Found net devices under 0000:31:00.0: cvl_0_0 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:26.471 Found net devices under 0000:31:00.1: cvl_0_1 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:26.471 09:28:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:26.471 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:26.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:07:26.472 00:07:26.472 --- 10.0.0.2 ping statistics --- 00:07:26.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.472 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:26.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
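The stretch of trace above is the per-test network bring-up. Of the two E810 ports found during discovery (cvl_0_0 and cvl_0_1), cvl_0_0 becomes the target interface and is moved into its own network namespace, cvl_0_1 stays in the root namespace as the initiator side, the pair is addressed on 10.0.0.0/24, and an iptables rule opens the NVMe/TCP port. Condensed from the commands in the trace, the bring-up is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP listener port

Splitting the two ports across namespaces lets a single host act as both target and initiator over the physical NICs instead of having the kernel short-circuit the traffic through loopback; the pings around this point (to 10.0.0.2 above, to 10.0.0.1 continuing below) verify reachability in both directions before the target starts.
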
00:07:26.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:07:26.472 00:07:26.472 --- 10.0.0.1 ping statistics --- 00:07:26.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.472 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=3145695 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 3145695 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # '[' -z 3145695 ']' 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local max_retries=100 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@843 -- # xtrace_disable 00:07:26.472 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:26.472 [2024-10-07 09:28:25.573426] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
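At this point nvmf_tgt has been launched inside the target namespace (the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE" invocation above) and the test is waiting on its RPC socket while the EAL/reactor startup notices continue below. The next stretch of the trace is the fabric and bdev plumbing, which condenses to the following sequence ($rpc_py is scripts/rpc.py, as set at ns_hotplug_stress.sh@11):

    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc_py bdev_malloc_create 32 512 -b Malloc0
    $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc_py bdev_null_create NULL1 1000 512
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

Everything after that is the stress loop proper: while the 30-second perf workload is still alive, the test repeatedly removes namespace 1 from the subsystem, re-adds Delay0, and bumps the null bdev's size argument by one per pass (null_size 1000, 1001, 1002, ...). Inferred shape of the loop, as a sketch rather than ns_hotplug_stress.sh verbatim:

    null_size=1000
    while kill -0 "$PERF_PID"; do
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 "$null_size"
    done
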
00:07:26.472 [2024-10-07 09:28:25.573488] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.472 [2024-10-07 09:28:25.663803] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:26.472 [2024-10-07 09:28:25.758328] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.472 [2024-10-07 09:28:25.758392] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.472 [2024-10-07 09:28:25.758401] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.472 [2024-10-07 09:28:25.758408] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.472 [2024-10-07 09:28:25.758415] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:26.472 [2024-10-07 09:28:25.759753] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.472 [2024-10-07 09:28:25.760015] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.472 [2024-10-07 09:28:25.760015] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.045 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:07:27.045 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@867 -- # return 0 00:07:27.045 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:27.045 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@733 -- # xtrace_disable 00:07:27.045 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:27.045 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.045 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:27.045 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:27.045 [2024-10-07 09:28:26.610876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.045 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:27.305 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.565 [2024-10-07 09:28:27.017551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.566 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:27.826 09:28:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:27.826 Malloc0 00:07:27.826 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:28.087 Delay0 00:07:28.087 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.347 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:28.347 NULL1 00:07:28.607 09:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:28.607 09:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3146117 00:07:28.607 09:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:28.607 09:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:28.607 09:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.867 09:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.127 09:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:29.127 09:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:29.127 true 00:07:29.127 09:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:29.127 09:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.388 09:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.650 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:29.650 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:29.650 true 00:07:29.650 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:29.650 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.912 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.172 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:30.172 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:30.172 true 00:07:30.172 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:30.172 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.433 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.693 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:30.693 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:30.953 true 00:07:30.953 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:30.953 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.953 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.213 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:31.213 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:31.474 true 00:07:31.474 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:31.474 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.474 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.735 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:31.735 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:31.996 true 00:07:31.996 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:31.996 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.996 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.258 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:32.258 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:32.519 true 00:07:32.519 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:32.519 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.519 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.780 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:32.780 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:33.041 true 00:07:33.041 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:33.041 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.303 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.303 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:33.303 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:33.563 true 00:07:33.563 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:33.563 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.824 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.824 09:28:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:33.824 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:34.086 true 00:07:34.086 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:34.086 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.348 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.611 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:34.611 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:34.611 true 00:07:34.611 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:34.611 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.872 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.132 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:35.132 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:35.132 true 00:07:35.132 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:35.132 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.392 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.653 09:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:35.653 09:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:35.653 true 00:07:35.653 09:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:35.653 09:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.914 09:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.176 09:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:36.176 09:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:36.176 true 00:07:36.436 09:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:36.436 09:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.436 09:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.696 09:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:36.696 09:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:36.957 true 00:07:36.957 09:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:36.957 09:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.957 09:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.219 09:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:37.219 09:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:37.480 true 00:07:37.480 09:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:37.480 09:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.480 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.740 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:37.740 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:38.000 true 00:07:38.000 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:38.000 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.260 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.260 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:38.260 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:38.521 true 00:07:38.521 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:38.521 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.782 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.782 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:38.782 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:39.112 true 00:07:39.112 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:39.112 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.112 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.372 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:39.372 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:39.631 true 00:07:39.631 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:39.632 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.632 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.892 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:39.892 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:40.151 true 00:07:40.151 09:28:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:40.151 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.422 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.422 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:40.422 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:40.683 true 00:07:40.683 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:40.683 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.943 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.943 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:40.943 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:41.203 true 00:07:41.203 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:41.203 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.463 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.463 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:41.463 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:41.722 true 00:07:41.722 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:41.722 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.981 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.240 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:42.240 09:28:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:42.240 true 00:07:42.240 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:42.240 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.501 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.762 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:42.762 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:42.762 true 00:07:42.762 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:42.762 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.022 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.282 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:43.282 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:43.282 true 00:07:43.282 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:43.282 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.542 09:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.801 09:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:43.801 09:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:43.801 true 00:07:43.801 09:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:43.802 09:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.063 09:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.323 09:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:44.323 09:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:44.323 true 00:07:44.585 09:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:44.585 09:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.585 09:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.847 09:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:44.847 09:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:45.108 true 00:07:45.108 09:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:45.108 09:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.108 09:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.369 09:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:45.369 09:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:45.629 true 00:07:45.629 09:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:45.629 09:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.891 09:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.891 09:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:45.891 09:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:46.152 true 00:07:46.152 09:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:46.152 09:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.413 09:28:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.413 09:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:46.413 09:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:46.675 true 00:07:46.675 09:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:46.675 09:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.935 09:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.196 09:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:47.196 09:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:47.196 true 00:07:47.196 09:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:47.196 09:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.457 09:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.718 09:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:47.718 09:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:47.718 true 00:07:47.718 09:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:47.718 09:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.978 09:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.240 09:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:48.240 09:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:48.240 true 00:07:48.240 09:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:48.240 09:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.501 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.762 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:48.762 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:48.762 true 00:07:48.762 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:48.762 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.023 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.284 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:49.284 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:49.284 true 00:07:49.546 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:49.546 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.546 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.807 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:49.807 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:50.068 true 00:07:50.068 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:50.068 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.068 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.329 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:50.329 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:50.590 true 00:07:50.590 09:28:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:50.590 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.852 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.852 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:50.852 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:51.113 true 00:07:51.113 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:51.113 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.374 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.374 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:51.374 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:51.634 true 00:07:51.634 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:51.634 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.894 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.894 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:51.894 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:52.155 true 00:07:52.155 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:52.155 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.415 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.676 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:52.676 09:28:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:52.676 true 00:07:52.676 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:52.676 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.936 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.198 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:53.198 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:53.198 true 00:07:53.198 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:53.198 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.458 09:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.719 09:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:53.719 09:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:53.719 true 00:07:53.996 09:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:53.996 09:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.996 09:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.319 09:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:54.319 09:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:54.319 true 00:07:54.319 09:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:54.319 09:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.607 09:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.905 09:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:54.905 09:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:54.905 true 00:07:54.905 09:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:54.905 09:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.165 09:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.427 09:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:07:55.427 09:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:07:55.427 true 00:07:55.427 09:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:55.427 09:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.688 09:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.949 09:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:07:55.949 09:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:07:55.949 true 00:07:55.949 09:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:55.949 09:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.233 09:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.494 09:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:07:56.494 09:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:07:56.494 true 00:07:56.494 09:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:56.494 09:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.755 09:28:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.017 09:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:07:57.017 09:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:07:57.017 true 00:07:57.278 09:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:57.278 09:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.278 09:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.541 09:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:07:57.541 09:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:07:57.802 true 00:07:57.802 09:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:57.802 09:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.802 09:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.063 09:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:07:58.063 09:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:07:58.324 true 00:07:58.324 09:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:58.324 09:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.324 09:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.598 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:07:58.598 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:07:58.862 true 00:07:58.862 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117 00:07:58.862 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:58.862 Initializing NVMe Controllers
00:07:58.862 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:58.862 Controller IO queue size 128, less than required.
00:07:58.862 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:58.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:58.862 Initialization complete. Launching workers.
00:07:58.862 ========================================================
00:07:58.862                                                                  Latency(us)
00:07:58.862 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:07:58.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   31096.00      15.18    4116.19    1126.27    8119.75
00:07:58.862 ========================================================
00:07:58.862 Total                                                                   :   31096.00      15.18    4116.19    1126.27    8119.75
00:07:58.862
00:07:58.862 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:59.123 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:07:59.123 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:07:59.389 true
00:07:59.389 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3146117
00:07:59.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3146117) - No such process
00:07:59.389 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3146117
00:07:59.389 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:59.651 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:59.651 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:59.651 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:59.651 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:59.651 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:59.651 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:59.911 null0
00:07:59.911 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:59.911 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:59.911 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
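The null_size=1037 through 1056 entries above are the same few script lines replayed under bash xtrace. Reconstructed roughly from the trace, ns_hotplug_stress.sh lines 44-50 amount to the loop below (here rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, and perf_pid is a guessed variable name for the I/O generator's PID, 3146117 in this run, since the trace only shows expanded values):

    while kill -0 "$perf_pid"; do                                      # sh@44: loop while the I/O job is still alive
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # sh@45: hot-remove NSID 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # sh@46: hot-add it back
        (( null_size++ ))                                              # sh@49: the trace shows the size growing by one per pass
        rpc.py bdev_null_resize NULL1 "$null_size"                     # sh@50: resize NSID 2's backing bdev under load
    done

Once the I/O process exits, kill -0 fails (the "line 44: kill: (3146117) - No such process" entry above), the loop ends, line 53 waits on the finished PID, and lines 54-55 remove both namespaces before the multi-threaded phase starts.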
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:59.911 null1 00:08:00.170 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:00.170 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.170 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:00.170 null2 00:08:00.170 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:00.170 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.170 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:00.431 null3 00:08:00.431 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:00.431 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.431 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:00.691 null4 00:08:00.691 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:00.691 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.691 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:00.691 null5 00:08:00.691 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:00.691 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.691 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:00.953 null6 00:08:00.953 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:00.953 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.953 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:01.215 null7 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.215 09:29:00 
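Before the workers start, script lines 58-60 provision one null bdev per thread; the alternating (( i < nthreads )) / bdev_null_create entries above expand to roughly the following (rpc.py again standing in for the full scripts/rpc.py path):

    nthreads=8                                     # sh@58
    pids=()                                        # sh@58: collects worker PIDs for the spawn loop below
    for (( i = 0; i < nthreads; i++ )); do         # sh@59
        rpc.py bdev_null_create "null$i" 100 4096  # sh@60: 100 MB null bdev with a 4096-byte block size
    done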
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
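Each backgrounded worker runs the add_remove helper that the trace expands as sh@14-18: it binds one namespace ID to one null bdev and hot-adds/hot-removes that namespace ten times. A sketch reconstructed from the trace (rpc.py abbreviated as before):

    add_remove() {
        local nsid=$1 bdev=$2                  # sh@14: e.g. nsid=1 bdev=null0
        for (( i = 0; i < 10; i++ )); do       # sh@16: ten add/remove round trips
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }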
00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:01.215 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:01.216 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.216 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.216 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:01.216 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.216 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.216 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.216 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:01.216 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3152769 3152771 3152774 3152777 3152780 3152783 3152786 3152789 00:08:01.216 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:01.216 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.216 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.216 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:01.216 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.216 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.477 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.477 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.477 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
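The eight workers run concurrently: script lines 62-64 fork one add_remove per namespace/bdev pair and record each PID, and the wait 3152769 ... 3152789 entry above is line 66 reaping all of them. Roughly:

    for (( i = 0; i < nthreads; i++ )); do    # sh@62
        add_remove "$(( i + 1 ))" "null$i" &  # sh@63: NSID i+1 backed by null$i, in the background
        pids+=($!)                            # sh@64: remember the worker's PID
    done
    wait "${pids[@]}"                         # sh@66: block until all eight workers finish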
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:01.477 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.477 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:01.477 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.477 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.737 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.737 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.737 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:01.737 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.737 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.737 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.737 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:01.737 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.998 09:29:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.998 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.260 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.260 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:02.260 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:02.260 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.260 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.260 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:02.260 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.260 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.261 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:02.261 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.261 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.261 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:02.261 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.261 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.261 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:02.261 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.261 09:29:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.261 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:02.261 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.261 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.261 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:02.261 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.261 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.261 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:02.521 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.521 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.521 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:02.521 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:02.522 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:02.522 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:02.522 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.522 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:02.522 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.522 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:02.522 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.522 09:29:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.522 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:02.522 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.522 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.522 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:02.522 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:02.522 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.522 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.522 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.522 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.522 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:02.522 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:02.522 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.522 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.522 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:02.783 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.783 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.783 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:02.783 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.783 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.783 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:02.783 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:02.783 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.783 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.783 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:02.783 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:02.783 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:02.783 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.783 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.783 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:02.783 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.044 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.044 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.044 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.044 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.044 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.044 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.044 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.044 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.044 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.045 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.045 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.045 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.045 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.045 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.045 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.045 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.045 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.045 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.045 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.045 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.045 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.045 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.045 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.305 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.566 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.566 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.566 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.566 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.566 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.566 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.566 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.566 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.566 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.566 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.566 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.566 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.566 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.566 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.566 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.566 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.566 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.566 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.566 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.566 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.566 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.566 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.826 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.086 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.086 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:04.086 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.086 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.086 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.086 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.086 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.086 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.086 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.086 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.086 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.086 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.086 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.086 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.086 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.086 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.086 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.348 
09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.348 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.348 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.609 09:29:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.609 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.871 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.871 
09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:05.133 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 3145695 ']'
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 3145695
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' -z 3145695 ']'
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # kill -0 3145695
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # uname
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:08:05.133 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3145695
00:08:05.394 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # process_name=reactor_1
00:08:05.394 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']'
00:08:05.394 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3145695'
killing process with pid 3145695
00:08:05.394 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # kill 3145695
00:08:05.394 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@977 -- # wait 3145695
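
Note: the ns_hotplug_stress.sh@16/@17/@18 entries that dominate the trace above come from a tight loop that adds and removes namespaces 1-8 on nqn.2016-06.io.spdk:cnode1, pairing namespace ID n with bdev null$((n-1)). A minimal sketch of that pattern follows; the loop guard and the two rpc.py calls are taken from the trace, while the random ID choice, the worker count, and the backgrounding are assumptions inferred from the out-of-order interleaving, not the verbatim script.

#!/usr/bin/env bash
# Sketch of the namespace hot-plug stress seen in the trace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for w in {1..4}; do                      # parallelism inferred from the trace; count assumed
    (
        for (( i = 0; i < 10; ++i )); do                                 # sh@16
            n=$(( RANDOM % 8 + 1 ))                                      # assumed ID choice
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$(( n - 1 ))" || true   # sh@17
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" || true          # sh@18
        done
    ) &
done
wait    # hot-unplugging namespaces while other contexts touch them is the point of the test

The nvmftestfini teardown that follows unloads nvme-tcp/nvme-fabrics and kills the target PID recorded at startup, as the trace shows.
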
00:08:05.394 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:08:05.394 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:08:05.394 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:08:05.394 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:08:05.394 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save
00:08:05.394 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:08:05.394 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore
00:08:05.394 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:05.394 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:05.394 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:05.394 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:05.394 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:07.947
00:08:07.947 real 0m49.510s
00:08:07.947 user 3m21.087s
00:08:07.947 sys 0m17.710s
00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # xtrace_disable
00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:08:07.947 ************************************
00:08:07.947 END TEST nvmf_ns_hotplug_stress
00:08:07.947 ************************************
00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']'
00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1110 -- # xtrace_disable
00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:07.947 ************************************
00:08:07.947 START TEST nvmf_delete_subsystem
00:08:07.947 ************************************
00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:07.947 * Looking for test storage...
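
Note: the real/user/sys block and the starred banners above are emitted by the run_test wrapper from autotest_common.sh, which times each sub-test; a rough sketch of its visible behavior follows (the actual helper also records per-test results for the final summary, which is omitted here, and the banner width is illustrative). The storage probe interrupted here resumes immediately below.

run_test() {
    # Sketch only: print the START banner, time the test body, print the END banner.
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
# e.g. run_test nvmf_delete_subsystem \
#   /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
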
00:08:07.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1626 -- # lcov --version 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:08:07.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.947 --rc genhtml_branch_coverage=1 00:08:07.947 --rc genhtml_function_coverage=1 00:08:07.947 --rc genhtml_legend=1 00:08:07.947 --rc geninfo_all_blocks=1 00:08:07.947 --rc geninfo_unexecuted_blocks=1 00:08:07.947 00:08:07.947 ' 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:08:07.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.947 --rc genhtml_branch_coverage=1 00:08:07.947 --rc genhtml_function_coverage=1 00:08:07.947 --rc genhtml_legend=1 00:08:07.947 --rc geninfo_all_blocks=1 00:08:07.947 --rc geninfo_unexecuted_blocks=1 00:08:07.947 00:08:07.947 ' 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:08:07.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.947 --rc genhtml_branch_coverage=1 00:08:07.947 --rc genhtml_function_coverage=1 00:08:07.947 --rc genhtml_legend=1 00:08:07.947 --rc geninfo_all_blocks=1 00:08:07.947 --rc geninfo_unexecuted_blocks=1 00:08:07.947 00:08:07.947 ' 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:08:07.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.947 --rc genhtml_branch_coverage=1 00:08:07.947 --rc genhtml_function_coverage=1 00:08:07.947 --rc genhtml_legend=1 00:08:07.947 --rc geninfo_all_blocks=1 00:08:07.947 --rc geninfo_unexecuted_blocks=1 00:08:07.947 00:08:07.947 ' 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.947 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:07.948 09:29:07 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:07.948 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.093 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.093 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:16.093 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:16.093 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:16.093 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:16.093 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:16.093 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:16.093 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:16.093 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:16.093 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:16.093 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:16.094 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:16.094 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 
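
Note: the common.sh@408-@416 entries above begin walking each detected PCI function, and the "Found net devices under ..." lines just below are the result: each e810 port is resolved to the kernel net device the driver created for it, via sysfs, keeping only interfaces that are up. A standalone sketch of that lookup follows; the PCI addresses are the two ports found above, while the operstate read is an assumed stand-in for whatever the traced [[ up == up ]] actually expands from.

# Map NIC PCI functions to their kernel net devices, as common.sh@409-@427 does.
for pci in 0000:31:00.0 0000:31:00.1; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $netdir ]] || continue                      # glob may match nothing
        dev=${netdir##*/}                                 # e.g. cvl_0_0, cvl_0_1
        # assumed stand-in for the traced "up" test:
        [[ $(cat "/sys/class/net/$dev/operstate" 2>/dev/null) == up ]] || continue
        echo "Found net devices under $pci: $dev"
    done
done
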
00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:16.094 Found net devices under 0000:31:00.0: cvl_0_0 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:16.094 Found net devices under 0000:31:00.1: cvl_0_1 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:16.094 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:16.095 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:16.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:16.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms
00:08:16.095
00:08:16.095 --- 10.0.0.2 ping statistics ---
00:08:16.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:16.095 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:16.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:16.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms
00:08:16.095
00:08:16.095 --- 10.0.0.1 ping statistics ---
00:08:16.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:16.095 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@727 -- # xtrace_disable
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=3158262
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 3158262
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # '[' -z 3158262 ']'
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local max_retries=100
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@843 -- # xtrace_disable
00:08:16.095 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:16.095 [2024-10-07 09:29:15.176608] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization...
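
Note: the nvmf_tcp_init trace above reads as a recipe: the target-side port is moved into a private network namespace, both ends get a 10.0.0.0/24 address, an iptables ACCEPT rule is punched for the NVMe/TCP port, and reachability is proven in both directions before the target launches. A condensed replay of exactly those commands follows; everything is lifted from the trace (the preliminary addr flushes are omitted), and only the grouping and comments are editorial.

ip netns add cvl_0_0_ns_spdk                  # common.sh@271
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # @274: target port leaves the host ns
ip addr add 10.0.0.1/24 dev cvl_0_1           # @277: initiator side, host ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @278: target side
ip link set cvl_0_1 up                                              # @281
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # @283
ip netns exec cvl_0_0_ns_spdk ip link set lo up                     # @284
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'   # @287/@788
ping -c 1 10.0.0.2                                 # @290: host -> target ns (0.673 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # @291: target ns -> host (0.329 ms above)

With both pings answered, nvmf_tgt is started inside the namespace and waitforlisten polls /var/tmp/spdk.sock, as the trace shows.
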
00:08:16.095 [2024-10-07 09:29:15.176704] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.095 [2024-10-07 09:29:15.268221] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.095 [2024-10-07 09:29:15.362869] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.095 [2024-10-07 09:29:15.362930] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.095 [2024-10-07 09:29:15.362939] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.095 [2024-10-07 09:29:15.362947] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.095 [2024-10-07 09:29:15.362953] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.095 [2024-10-07 09:29:15.364105] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.095 [2024-10-07 09:29:15.364106] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.356 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:08:16.356 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@867 -- # return 0 00:08:16.356 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:16.356 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@733 -- # xtrace_disable 00:08:16.356 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.618 [2024-10-07 09:29:16.050654] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:08:16.618 
09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.618 [2024-10-07 09:29:16.074987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.618 NULL1 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.618 Delay0 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3158319 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:16.618 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:16.618 [2024-10-07 09:29:16.192025] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
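Taken together, the rpc_cmd calls above form the complete per-test target configuration. As a standalone sketch (driving the same RPCs through scripts/rpc.py is an assumption — the test's rpc_cmd helper issues the identical RPC names and arguments shown in the trace):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed entry point; rpc_cmd resolves to the same RPCs
$rpc nvmf_create_transport -t tcp -o -u 8192                           # TCP transport, options as recorded above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                                   # 1000 MiB null bdev, 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # 1,000,000 us on every latency knob
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The Delay0 wrapper is the point of the test: with roughly one second of artificial latency on each I/O, the spdk_nvme_perf job launched above (-t 5, queue depth 128) is guaranteed to have requests in flight when nvmf_delete_subsystem fires, and the burst of 'completed with error (sct=0, sc=8)' completions that follows is that queued I/O being failed back as the subsystem disappears.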
00:08:18.533 09:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:18.533 09:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:08:18.533 09:29:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 [2024-10-07 09:29:18.293032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d356c0 is same with the state(6) to be set 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 
Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 [2024-10-07 09:29:18.293513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d351b0 is same with the state(6) to be set 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read 
completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 starting I/O failed: -6 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 [2024-10-07 09:29:18.294482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fec04000c00 is same with the state(6) to be set 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 
Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:18.794 Read completed with error (sct=0, sc=8) 00:08:18.794 Write completed with error (sct=0, sc=8) 00:08:19.734 [2024-10-07 09:29:19.251449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d366b0 is same with the state(6) to be set 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 [2024-10-07 09:29:19.296443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34fd0 is same with the state(6) to be set 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Write completed 
with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 [2024-10-07 09:29:19.297194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d35390 is same with the state(6) to be set 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Write completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.734 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Write completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Write completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Write completed with error (sct=0, sc=8) 00:08:19.735 Write completed with error (sct=0, sc=8) 00:08:19.735 Write completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Write completed with error (sct=0, sc=8) 00:08:19.735 Write completed with error (sct=0, sc=8) 00:08:19.735 [2024-10-07 09:29:19.297530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fec0400cfe0 is same with the state(6) to be set 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Write completed with error (sct=0, sc=8) 00:08:19.735 Write completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error 
(sct=0, sc=8) 00:08:19.735 Write completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Write completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Write completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Write completed with error (sct=0, sc=8) 00:08:19.735 Write completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Write completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 Read completed with error (sct=0, sc=8) 00:08:19.735 [2024-10-07 09:29:19.297649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fec0400d780 is same with the state(6) to be set 00:08:19.735 Initializing NVMe Controllers 00:08:19.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:19.735 Controller IO queue size 128, less than required. 00:08:19.735 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:19.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:19.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:19.735 Initialization complete. Launching workers. 
00:08:19.735 ======================================================== 00:08:19.735 Latency(us) 00:08:19.735 Device Information : IOPS MiB/s Average min max 00:08:19.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.47 0.08 934028.45 390.73 2003854.50 00:08:19.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.52 0.08 980399.20 310.23 2003735.47 00:08:19.735 ======================================================== 00:08:19.735 Total : 327.99 0.16 956581.50 310.23 2003854.50 00:08:19.735 00:08:19.735 [2024-10-07 09:29:19.298164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d366b0 (9): Bad file descriptor 00:08:19.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:19.735 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:08:19.735 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:19.735 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3158319 00:08:19.735 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:20.323 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:20.323 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3158319 00:08:20.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3158319) - No such process 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3158319 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # local es=0 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # valid_exec_arg wait 3158319 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # local arg=wait 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@645 -- # type -t wait 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@656 -- # wait 3158319 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@656 -- # es=1 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:08:20.324 09:29:19 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.324 [2024-10-07 09:29:19.828267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3159134 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3159134 00:08:20.324 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:20.324 [2024-10-07 09:29:19.916559] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
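The delay/kill -0 exchange that follows is the test's wait-for-exit loop (target/delete_subsystem.sh lines 56-60, as echoed in the xtrace). A minimal sketch of the logic, with the failure action assumed since the trace only records the guard:

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do   # signal 0 delivers nothing; it only tests process existence
    (( delay++ > 20 )) && exit 1            # assumed failure path once the ~10 s polling budget is spent
    sleep 0.5
done
# kill reporting 'No such process' means spdk_nvme_perf exited on its own,
# which is what the loop converges to once the 3-second run (-t 3) finishes.

The first perf run used the same pattern with a larger bound ((( delay++ > 30 )) at line 38); here a handful of 0.5 s polls covers the shorter -t 3 workload, as the repeated @57/@58 lines below show.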
00:08:20.895 09:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:20.895 09:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3159134 00:08:20.895 09:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:21.466 09:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:21.466 09:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3159134 00:08:21.466 09:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:21.726 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:21.726 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3159134 00:08:21.726 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:22.297 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:22.297 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3159134 00:08:22.297 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:22.875 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:22.875 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3159134 00:08:22.875 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:23.449 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:23.449 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3159134 00:08:23.449 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:23.449 Initializing NVMe Controllers 00:08:23.449 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:23.449 Controller IO queue size 128, less than required. 00:08:23.449 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:23.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:23.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:23.449 Initialization complete. Launching workers. 
00:08:23.449 ======================================================== 00:08:23.449 Latency(us) 00:08:23.449 Device Information : IOPS MiB/s Average min max 00:08:23.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002299.89 1000112.37 1005329.45 00:08:23.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003121.52 1000208.96 1008387.34 00:08:23.449 ======================================================== 00:08:23.449 Total : 256.00 0.12 1002710.70 1000112.37 1008387.34 00:08:23.449 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3159134 00:08:24.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3159134) - No such process 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3159134 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:24.023 rmmod nvme_tcp 00:08:24.023 rmmod nvme_fabrics 00:08:24.023 rmmod nvme_keyring 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 3158262 ']' 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 3158262 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' -z 3158262 ']' 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # kill -0 3158262 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # uname 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3158262 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@963 -- # '[' 
reactor_0 = sudo ']' 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3158262' 00:08:24.023 killing process with pid 3158262 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # kill 3158262 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@977 -- # wait 3158262 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.023 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.570 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:26.570 00:08:26.570 real 0m18.601s 00:08:26.570 user 0m30.718s 00:08:26.570 sys 0m6.974s 00:08:26.570 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # xtrace_disable 00:08:26.570 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.570 ************************************ 00:08:26.570 END TEST nvmf_delete_subsystem 00:08:26.570 ************************************ 00:08:26.570 09:29:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:26.570 09:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:08:26.570 09:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1110 -- # xtrace_disable 00:08:26.570 09:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:26.570 ************************************ 00:08:26.570 START TEST nvmf_host_management 00:08:26.570 ************************************ 00:08:26.570 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:26.570 * Looking for test storage... 
00:08:26.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:26.570 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:08:26.570 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1626 -- # lcov --version 00:08:26.570 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:08:26.570 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:08:26.570 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.570 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.570 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.570 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.570 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.570 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.570 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:08:26.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.571 --rc genhtml_branch_coverage=1 00:08:26.571 --rc genhtml_function_coverage=1 00:08:26.571 --rc genhtml_legend=1 00:08:26.571 --rc geninfo_all_blocks=1 00:08:26.571 --rc geninfo_unexecuted_blocks=1 00:08:26.571 00:08:26.571 ' 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:08:26.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.571 --rc genhtml_branch_coverage=1 00:08:26.571 --rc genhtml_function_coverage=1 00:08:26.571 --rc genhtml_legend=1 00:08:26.571 --rc geninfo_all_blocks=1 00:08:26.571 --rc geninfo_unexecuted_blocks=1 00:08:26.571 00:08:26.571 ' 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:08:26.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.571 --rc genhtml_branch_coverage=1 00:08:26.571 --rc genhtml_function_coverage=1 00:08:26.571 --rc genhtml_legend=1 00:08:26.571 --rc geninfo_all_blocks=1 00:08:26.571 --rc geninfo_unexecuted_blocks=1 00:08:26.571 00:08:26.571 ' 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:08:26.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.571 --rc genhtml_branch_coverage=1 00:08:26.571 --rc genhtml_function_coverage=1 00:08:26.571 --rc genhtml_legend=1 00:08:26.571 --rc geninfo_all_blocks=1 00:08:26.571 --rc geninfo_unexecuted_blocks=1 00:08:26.571 00:08:26.571 ' 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:26.571 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:26.572 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:26.572 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:26.572 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.572 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:26.572 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:26.572 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:26.572 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.572 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.572 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.572 09:29:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:26.572 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:26.572 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:26.572 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.747 09:29:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:34.747 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:34.747 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:34.748 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:34.748 09:29:33 
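gather_supported_nvmf_pci_devs, traced above, buckets NICs into e810/x722/mlx arrays keyed by PCI vendor:device ID (both ports in this run match 0x8086:0x159b, an E810 part bound to the ice driver) and then resolves each function's kernel net devices through sysfs. A condensed sketch of the same idea using lspci directly (the parsing is illustrative; SPDK consults a prebuilt pci_bus_cache instead):

    e810=()
    # `lspci -Dn` lines look like: 0000:31:00.0 0200: 8086:159b (rev 02)
    while read -r addr _ id _; do
        case $id in
            8086:1592|8086:159b) e810+=("$addr") ;;   # E810 device IDs from the trace
        esac
    done < <(lspci -Dn)
    for pci in "${e810[@]}"; do
        # each PCI function lists its bound netdev names under sysfs
        echo "net devs under $pci:" /sys/bus/pci/devices/"$pci"/net/*
    done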
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:34.748 Found net devices under 0000:31:00.0: cvl_0_0 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:34.748 Found net devices under 0000:31:00.1: cvl_0_1 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # 
NVMF_SECOND_TARGET_IP= 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:34.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:08:34.748 00:08:34.748 --- 10.0.0.2 ping statistics --- 00:08:34.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.748 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:34.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:08:34.748 00:08:34.748 --- 10.0.0.1 ping statistics --- 00:08:34.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.748 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=3164306 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 3164306 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # '[' -z 3164306 ']' 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local max_retries=100 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
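nvmf_tcp_init, traced above, moves the target-side port into its own network namespace so the initiator interface (cvl_0_1, 10.0.0.1) reaches the target interface (cvl_0_0, 10.0.0.2) over the physical link rather than loopback, then proves connectivity both ways with single pings. Stripped of xtrace noise, the sequence is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic (port 4420) on the test interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The target application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E); core mask 0x1E is binary 11110, which is why the four reactors in the next records come up on cores 1 through 4.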
00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@843 -- # xtrace_disable 00:08:34.748 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.748 [2024-10-07 09:29:33.917361] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:08:34.748 [2024-10-07 09:29:33.917424] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.748 [2024-10-07 09:29:34.007437] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.748 [2024-10-07 09:29:34.101828] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.748 [2024-10-07 09:29:34.101892] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.748 [2024-10-07 09:29:34.101901] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.748 [2024-10-07 09:29:34.101908] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.748 [2024-10-07 09:29:34.101914] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.748 [2024-10-07 09:29:34.104002] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.748 [2024-10-07 09:29:34.104165] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.749 [2024-10-07 09:29:34.104330] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:08:34.749 [2024-10-07 09:29:34.104332] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@867 -- # return 0 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@733 -- # xtrace_disable 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@564 -- # xtrace_disable 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:35.322 [2024-10-07 09:29:34.796502] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@564 -- # xtrace_disable 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:35.322 Malloc0 00:08:35.322 [2024-10-07 09:29:34.865893] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@733 -- # xtrace_disable 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3164446 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3164446 /var/tmp/bdevperf.sock 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # '[' -z 3164446 ']' 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local max_retries=100 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:35.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
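The rpcs.txt batch cat'd into rpc_cmd above (host_management.sh@23 through @30) is not echoed into the log, but its effects are: a Malloc0 bdev appears and the target starts listening on 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode0. Given MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 earlier in the trace, a plausible reconstruction of that batch (hedged; the exact flags are not shown in this excerpt):

    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000001
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The add_host line is implied by the later test step, which removes and re-adds exactly that host NQN.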
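waitforlisten, seen here for bdevperf and earlier for the target, boils down to polling the application's RPC Unix socket (with max_retries=100 per the trace) until it answers, while confirming the pid is still alive. A minimal sketch of that loop, assuming SPDK's rpc.py and its rpc_get_methods call; the real helper differs in details:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 100; i != 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1                     # app died early
            rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                                       # timed out
    }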
00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@843 -- # xtrace_disable 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:35.322 { 00:08:35.322 "params": { 00:08:35.322 "name": "Nvme$subsystem", 00:08:35.322 "trtype": "$TEST_TRANSPORT", 00:08:35.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.322 "adrfam": "ipv4", 00:08:35.322 "trsvcid": "$NVMF_PORT", 00:08:35.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.322 "hdgst": ${hdgst:-false}, 00:08:35.322 "ddgst": ${ddgst:-false} 00:08:35.322 }, 00:08:35.322 "method": "bdev_nvme_attach_controller" 00:08:35.322 } 00:08:35.322 EOF 00:08:35.322 )") 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:08:35.322 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:35.322 "params": { 00:08:35.322 "name": "Nvme0", 00:08:35.322 "trtype": "tcp", 00:08:35.322 "traddr": "10.0.0.2", 00:08:35.322 "adrfam": "ipv4", 00:08:35.322 "trsvcid": "4420", 00:08:35.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:35.322 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:35.322 "hdgst": false, 00:08:35.322 "ddgst": false 00:08:35.322 }, 00:08:35.322 "method": "bdev_nvme_attach_controller" 00:08:35.322 }' 00:08:35.583 [2024-10-07 09:29:34.986632] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:08:35.584 [2024-10-07 09:29:34.986705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3164446 ] 00:08:35.584 [2024-10-07 09:29:35.071317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.584 [2024-10-07 09:29:35.168211] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.846 Running I/O for 10 seconds... 
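gen_nvmf_target_json, whose heredoc is traced above, emits one bdev_nvme_attach_controller entry per requested subsystem and pipes it through jq into the file descriptor that bdevperf reads via --json /dev/fd/63. Reassembled from the printf output in the trace, the config handed to this run looks like the following (the outer "subsystems" wrapper is reconstructed from SPDK's usual bdev-subsystem layout, so treat it as a sketch):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }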
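In the records that follow, waitforio polls the running bdevperf over its RPC socket until the Nvme0n1 bdev shows at least 100 completed reads (it reports 590), which guarantees I/O is actually flowing before the host-management step revokes access. Condensed from the traced commands (the poll interval is an assumption; the countdown of 10 attempts comes from the trace):

    waitforio() {
        local rpc_addr=$1 bdev=$2 i io_count
        for ((i = 10; i != 0; i--)); do
            io_count=$(rpc.py -s "$rpc_addr" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            [ "$io_count" -ge 100 ] && return 0
            sleep 0.25
        done
        return 1
    }
    waitforio /var/tmp/bdevperf.sock Nvme0n1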
00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@867 -- # return 0 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@564 -- # xtrace_disable 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@564 -- # xtrace_disable 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=590 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 590 -ge 100 ']' 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@564 -- # xtrace_disable 00:08:36.421 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:36.421 [2024-10-07 
09:29:35.869664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e40870 is same with the state(6) to be set 00:08:36.421 [2024-10-07 09:29:35.869763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e40870 is same with the state(6) to be set 00:08:36.421 [2024-10-07 09:29:35.869773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e40870 is same with the state(6) to be set 00:08:36.421 [2024-10-07 09:29:35.869789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e40870 is same with the state(6) to be set 00:08:36.421 [2024-10-07 09:29:35.869796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e40870 is same with the state(6) to be set 00:08:36.421 [2024-10-07 09:29:35.869803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e40870 is same with the state(6) to be set 00:08:36.422 [2024-10-07 09:29:35.869809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e40870 is same with the state(6) to be set 00:08:36.422 [2024-10-07 09:29:35.869816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e40870 is same with the state(6) to be set 00:08:36.422 [2024-10-07 09:29:35.869823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e40870 is same with the state(6) to be set 00:08:36.422 [2024-10-07 09:29:35.869830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e40870 is same with the state(6) to be set 00:08:36.422 [2024-10-07 09:29:35.869836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e40870 is same with the state(6) to be set 00:08:36.422 [2024-10-07 09:29:35.869843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e40870 is same with the state(6) to be set 00:08:36.422 [2024-10-07 09:29:35.869850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e40870 is same with the state(6) to be set 00:08:36.422 [2024-10-07 09:29:35.869857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e40870 is same with the state(6) to be set 00:08:36.422 [2024-10-07 09:29:35.869863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e40870 is same with the state(6) to be set 00:08:36.422 [2024-10-07 09:29:35.869870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e40870 is same with the state(6) to be set 00:08:36.422 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:08:36.422 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:36.422 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@564 -- # xtrace_disable 00:08:36.422 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:36.422 [2024-10-07 09:29:35.883756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:36.422 [2024-10-07 09:29:35.883810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.883822] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:36.422 [2024-10-07 09:29:35.883830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.883838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:36.422 [2024-10-07 09:29:35.883847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.883855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:36.422 [2024-10-07 09:29:35.883863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.883871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce92a0 is same with the state(6) to be set 00:08:36.422 [2024-10-07 09:29:35.884513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.422 [2024-10-07 09:29:35.884874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.422 [2024-10-07 09:29:35.884883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.884891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.884901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.884908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.884918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.884925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.884935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.884943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.884953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.884960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.884969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.884977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.884986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.884996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.423 [2024-10-07 09:29:35.885415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.423 [2024-10-07 09:29:35.885422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.424 [2024-10-07 09:29:35.885432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.424 [2024-10-07 09:29:35.885442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.424 [2024-10-07 09:29:35.885452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.424 [2024-10-07 09:29:35.885459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.424 [2024-10-07 09:29:35.885468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.424 [2024-10-07 09:29:35.885475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.424 [2024-10-07 09:29:35.885485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.424 [2024-10-07 09:29:35.885493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.424 [2024-10-07 09:29:35.885502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.424 [2024-10-07 09:29:35.885510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.424 [2024-10-07 09:29:35.885519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.424 [2024-10-07 09:29:35.885527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.424 [2024-10-07 09:29:35.885536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.424 [2024-10-07 09:29:35.885544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:08:36.424 [2024-10-07 09:29:35.885554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.424 [2024-10-07 09:29:35.885561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.424 [2024-10-07 09:29:35.885571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.424 [2024-10-07 09:29:35.885578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.424 [2024-10-07 09:29:35.885588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.424 [2024-10-07 09:29:35.885595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.424 [2024-10-07 09:29:35.885605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.424 [2024-10-07 09:29:35.885612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.424 [2024-10-07 09:29:35.885628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.424 [2024-10-07 09:29:35.885635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.424 [2024-10-07 09:29:35.885645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.424 [2024-10-07 09:29:35.885653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.424 [2024-10-07 09:29:35.885664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.424 [2024-10-07 09:29:35.885672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.424 [2024-10-07 09:29:35.885750] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf01f60 was disconnected and freed. reset controller. 
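The burst of ABORTED - SQ DELETION completions above is the test working as intended: nvmf_subsystem_remove_host revokes nqn.2016-06.io.spdk:host0's access while bdevperf is mid-run, the target tears down the qpair (0xf01f60), and every queued WRITE completes aborted. Shown here as standalone rpc.py equivalents of the traced rpc_cmd calls:

    rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # bdevperf's verify job fails and the controller disconnects; once access
    # is restored the reconnect succeeds, as the next records show:
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0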
00:08:36.424 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:08:36.424 [2024-10-07 09:29:35.886946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:08:36.424 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:08:36.424 task offset: 90112 on job bdev=Nvme0n1 fails
00:08:36.424
00:08:36.424 Latency(us)
00:08:36.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:36.424 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:36.424 Job: Nvme0n1 ended in about 0.50 seconds with error
00:08:36.424 Verification LBA range: start 0x0 length 0x400
00:08:36.424 Nvme0n1 : 0.50 1402.87 87.68 127.53 0.00 40693.00 1706.67 36263.25
00:08:36.424 ===================================================================================================================
00:08:36.424 Total : 1402.87 87.68 127.53 0.00 40693.00 1706.67 36263.25
00:08:36.424 [2024-10-07 09:29:35.889147] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:36.424 [2024-10-07 09:29:35.889183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce92a0 (9): Bad file descriptor
00:08:36.424 [2024-10-07 09:29:35.945145] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:37.369 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3164446
00:08:37.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3164446) - No such process
00:08:37.369 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:08:37.369 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:08:37.369 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:08:37.369 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:08:37.369 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=()
00:08:37.369 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config
00:08:37.369 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:08:37.369 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:08:37.369 {
00:08:37.369 "params": {
00:08:37.369 "name": "Nvme$subsystem",
00:08:37.369 "trtype": "$TEST_TRANSPORT",
00:08:37.369 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:37.369 "adrfam": "ipv4",
00:08:37.369 "trsvcid": "$NVMF_PORT",
00:08:37.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:37.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:37.369 "hdgst": ${hdgst:-false},
00:08:37.369 "ddgst": ${ddgst:-false}
00:08:37.369 },
00:08:37.369 "method": "bdev_nvme_attach_controller"
00:08:37.369 }
00:08:37.369 EOF
00:08:37.369 )")
00:08:37.369 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat
00:08:37.369 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq .
00:08:37.369 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=,
00:08:37.369 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:08:37.369 "params": {
00:08:37.369 "name": "Nvme0",
00:08:37.369 "trtype": "tcp",
00:08:37.369 "traddr": "10.0.0.2",
00:08:37.369 "adrfam": "ipv4",
00:08:37.369 "trsvcid": "4420",
00:08:37.369 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:37.369 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:08:37.369 "hdgst": false,
00:08:37.369 "ddgst": false
00:08:37.369 },
00:08:37.369 "method": "bdev_nvme_attach_controller"
00:08:37.369 }'
00:08:37.369 [2024-10-07 09:29:36.947568] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization...
00:08:37.369 [2024-10-07 09:29:36.947627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3164841 ]
00:08:37.369 [2024-10-07 09:29:37.027762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:37.631 [2024-10-07 09:29:37.091724] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:37.891 Running I/O for 1 seconds...
00:08:38.831 1856.00 IOPS, 116.00 MiB/s
00:08:38.831
00:08:38.831 Latency(us)
00:08:38.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:38.831 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:38.831 Verification LBA range: start 0x0 length 0x400
00:08:38.831 Nvme0n1 : 1.02 1889.69 118.11 0.00 0.00 33241.73 6007.47 29491.20
00:08:38.831 ===================================================================================================================
00:08:38.831 Total : 1889.69 118.11 0.00 0.00 33241.73 6007.47 29491.20
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:39.091 rmmod nvme_tcp
00:08:39.091 rmmod nvme_fabrics
00:08:39.091 rmmod nvme_keyring
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 3164306 ']'
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 3164306
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' -z 3164306 ']'
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # kill -0 3164306
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # uname
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3164306
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # process_name=reactor_1
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']'
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3164306'
00:08:39.091 killing process with pid 3164306
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # kill 3164306
00:08:39.091 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@977 -- # wait 3164306
00:08:39.352 [2024-10-07 09:29:38.803700] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:08:39.352 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:08:39.352 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:08:39.352 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:08:39.352 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:08:39.352 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save
00:08:39.352 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:08:39.352 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore
00:08:39.352 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:39.352 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:39.352 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:39.352 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:39.352 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:41.265 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:41.265 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:08:41.265
00:08:41.265 real 0m15.110s
00:08:41.265 user 0m23.798s
00:08:41.265 sys 0m7.054s
00:08:41.265 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # xtrace_disable
00:08:41.265 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:41.265 ************************************
00:08:41.265 END TEST nvmf_host_management
00:08:41.265 ************************************
00:08:41.525 09:29:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:08:41.525 09:29:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']'
00:08:41.525 09:29:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1110 -- # xtrace_disable
00:08:41.525 09:29:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:41.525 ************************************
00:08:41.525 START TEST nvmf_lvol
00:08:41.525 ************************************
00:08:41.525 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:08:41.525 * Looking for test storage...
00:08:41.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:41.525 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1625 -- # [[ y == y ]]
00:08:41.525 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1626 -- # lcov --version
00:08:41.525 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1626 -- # awk '{print $NF}'
00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1626 -- # lt 1.15 2
00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.786 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:08:41.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.787 --rc genhtml_branch_coverage=1 00:08:41.787 --rc genhtml_function_coverage=1 00:08:41.787 --rc genhtml_legend=1 00:08:41.787 --rc geninfo_all_blocks=1 00:08:41.787 --rc geninfo_unexecuted_blocks=1 00:08:41.787 00:08:41.787 ' 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:08:41.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.787 --rc genhtml_branch_coverage=1 00:08:41.787 --rc genhtml_function_coverage=1 00:08:41.787 --rc genhtml_legend=1 00:08:41.787 --rc geninfo_all_blocks=1 00:08:41.787 --rc geninfo_unexecuted_blocks=1 00:08:41.787 00:08:41.787 ' 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:08:41.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.787 --rc genhtml_branch_coverage=1 00:08:41.787 --rc genhtml_function_coverage=1 00:08:41.787 --rc genhtml_legend=1 00:08:41.787 --rc geninfo_all_blocks=1 00:08:41.787 --rc geninfo_unexecuted_blocks=1 00:08:41.787 00:08:41.787 ' 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:08:41.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.787 --rc genhtml_branch_coverage=1 00:08:41.787 --rc genhtml_function_coverage=1 00:08:41.787 --rc genhtml_legend=1 00:08:41.787 --rc geninfo_all_blocks=1 00:08:41.787 --rc geninfo_unexecuted_blocks=1 00:08:41.787 00:08:41.787 ' 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:41.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:41.787 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:49.930 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.930 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:49.930 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:49.930 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:49.930 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:49.931 09:29:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:49.931 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # 
for pci in "${pci_devs[@]}" 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:49.931 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:49.931 Found net devices under 0000:31:00.0: cvl_0_0 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:49.931 Found net devices under 0000:31:00.1: cvl_0_1 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ 
yes == yes ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:49.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:49.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms
00:08:49.931
00:08:49.931 --- 10.0.0.2 ping statistics ---
00:08:49.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:49.931 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms
00:08:49.931 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:49.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:49.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms
00:08:49.931
00:08:49.931 --- 10.0.0.1 ping statistics ---
00:08:49.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:49.931 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms
00:08:49.932 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:49.932 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0
00:08:49.932 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:08:49.932 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:49.932 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:08:49.932 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:08:49.932 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:49.932 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:08:49.932 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:08:49.932 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:08:49.932 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:08:49.932 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@727 -- # xtrace_disable
00:08:49.932 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:08:49.932 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=3169579
00:08:49.932 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 3169579
00:08:49.932 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:08:49.932 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # '[' -z 3169579 ']'
00:08:49.932 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:49.932 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local max_retries=100
00:08:49.932 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:49.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:49.932 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@843 -- # xtrace_disable
00:08:49.932 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:08:49.932 [2024-10-07 09:29:49.065010] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization...
00:08:49.932 [2024-10-07 09:29:49.065072] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:49.932 [2024-10-07 09:29:49.154645] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:49.932 [2024-10-07 09:29:49.251084] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:49.932 [2024-10-07 09:29:49.251149] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:49.932 [2024-10-07 09:29:49.251158] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:49.932 [2024-10-07 09:29:49.251165] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:49.932 [2024-10-07 09:29:49.251172] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:49.932 [2024-10-07 09:29:49.252696] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:08:49.932 [2024-10-07 09:29:49.252869] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:08:49.932 [2024-10-07 09:29:49.252870] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:50.505 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@863 -- # (( i == 0 ))
00:08:50.505 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@867 -- # return 0
00:08:50.505 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:08:50.505 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@733 -- # xtrace_disable
00:08:50.505 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:08:50.505 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:50.505 09:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:08:50.505 [2024-10-07 09:29:50.109156] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:50.505 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:50.765 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:08:50.765 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:51.026 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:08:51.026 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:08:51.286 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:08:51.547 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5fc5cd9d-fdd8-41ef-aa6a-cc652048b89e
00:08:51.547 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5fc5cd9d-fdd8-41ef-aa6a-cc652048b89e lvol 20
00:08:51.808 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=40a9fcb7-bb83-42aa-bb8d-cd24812bd325
00:08:51.808 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:08:51.808 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 40a9fcb7-bb83-42aa-bb8d-cd24812bd325
00:08:52.069 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:08:52.329 [2024-10-07 09:29:51.760836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:52.329 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:08:52.590 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3170267
00:08:52.590 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:08:52.590 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:08:53.529 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 40a9fcb7-bb83-42aa-bb8d-cd24812bd325 MY_SNAPSHOT
00:08:53.789 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7cf10393-ccd9-49d0-ad64-74cf6e55c94c
00:08:53.789 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 40a9fcb7-bb83-42aa-bb8d-cd24812bd325 30
00:08:53.789 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7cf10393-ccd9-49d0-ad64-74cf6e55c94c MY_CLONE
00:08:54.049 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7e9d4be6-2509-4634-bb5d-2ad50c8960f5
00:08:54.049 09:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7e9d4be6-2509-4634-bb5d-2ad50c8960f5
00:08:54.622 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3170267
00:09:04.624 Initializing NVMe Controllers
00:09:04.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:09:04.624 Controller IO queue size 128, less than required.
00:09:04.624 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:04.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:09:04.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:09:04.624 Initialization complete. Launching workers.
00:09:04.624 ========================================================
00:09:04.624 Latency(us)
00:09:04.624 Device Information : IOPS MiB/s Average min max
00:09:04.624 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16139.50 63.04 7931.59 1490.46 54371.97
00:09:04.624 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17048.60 66.60 7509.94 280.33 57873.79
00:09:04.624 ========================================================
00:09:04.624 Total : 33188.10 129.64 7714.99 280.33 57873.79
00:09:04.624
00:09:04.624 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:09:04.624 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 40a9fcb7-bb83-42aa-bb8d-cd24812bd325
00:09:04.624 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5fc5cd9d-fdd8-41ef-aa6a-cc652048b89e
00:09:04.624 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:09:04.624 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:09:04.624 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:09:04.624 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup
00:09:04.624 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:09:04.624 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:04.624 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:09:04.624 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:04.624 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:04.624 rmmod nvme_tcp
00:09:04.624 rmmod nvme_fabrics
00:09:04.624 rmmod nvme_keyring
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 3169579 ']'
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 3169579
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' -z 3169579 ']'
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # kill -0 3169579
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # uname
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3169579
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # process_name=reactor_0
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']'
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3169579'
00:09:04.625 killing process with pid 3169579
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # kill 3169579
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@977 -- # wait 3169579
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:04.625 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:06.012 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:06.012
00:09:06.012 real 0m24.361s
00:09:06.012 user 1m5.187s
00:09:06.012 sys 0m8.865s
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # xtrace_disable
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:09:06.013 ************************************
00:09:06.013 END TEST nvmf_lvol
00:09:06.013 ************************************
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']'
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1110 -- # xtrace_disable
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:06.013 ************************************
00:09:06.013 START TEST nvmf_lvs_grow
00:09:06.013 ************************************
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:09:06.013 * Looking for test storage...
00:09:06.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1625 -- # [[ y == y ]]
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1626 -- # lcov --version
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1626 -- # awk '{print $NF}'
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1626 -- # lt 1.15 2
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-:
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-:
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<'
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:09:06.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.013 --rc genhtml_branch_coverage=1 00:09:06.013 --rc genhtml_function_coverage=1 00:09:06.013 --rc genhtml_legend=1 00:09:06.013 --rc geninfo_all_blocks=1 00:09:06.013 --rc geninfo_unexecuted_blocks=1 00:09:06.013 00:09:06.013 ' 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:09:06.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.013 --rc genhtml_branch_coverage=1 00:09:06.013 --rc genhtml_function_coverage=1 00:09:06.013 --rc genhtml_legend=1 00:09:06.013 --rc geninfo_all_blocks=1 00:09:06.013 --rc geninfo_unexecuted_blocks=1 00:09:06.013 00:09:06.013 ' 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:09:06.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.013 --rc genhtml_branch_coverage=1 00:09:06.013 --rc genhtml_function_coverage=1 00:09:06.013 --rc genhtml_legend=1 00:09:06.013 --rc geninfo_all_blocks=1 00:09:06.013 --rc geninfo_unexecuted_blocks=1 00:09:06.013 00:09:06.013 ' 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:09:06.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.013 --rc genhtml_branch_coverage=1 00:09:06.013 --rc genhtml_function_coverage=1 00:09:06.013 --rc genhtml_legend=1 00:09:06.013 --rc geninfo_all_blocks=1 00:09:06.013 --rc geninfo_unexecuted_blocks=1 00:09:06.013 00:09:06.013 ' 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:06.013 09:30:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.013 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.275 09:30:05 
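Each `source` of paths/export.sh above prepends the golangci/protoc/go directories again, so the traced PATH accumulates the same triplet many times over. That is harmless but noisy in the log; a hedged sketch of an idempotent alternative (plain bash, not what export.sh actually does):

```bash
#!/usr/bin/env bash
# Prepend a directory to PATH only if it is not already present,
# so repeated sourcing stays idempotent.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;               # already on PATH, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}

path_prepend /opt/golangci/1.54.2/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/go/1.21.1/bin
export PATH
```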
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:06.275 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:14.428 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
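Note the genuine failure captured near the top of this block: `common.sh: line 33: [: : integer expression expected`. The guard `'[' '' -eq 1 ']'` hands `test` an empty expansion where `-eq` requires an integer; the run survives only because the test then evaluates false. The message is avoidable with either of these patterns (the variable name below is a stand-in, not the actual one from common.sh):

```bash
#!/usr/bin/env bash
SOME_FLAG=""   # stand-in for the unset toggle that triggers the message

# Noisy: [ "" -eq 1 ] -> "integer expression expected"

# 1) Default the expansion so test always sees an integer:
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
fi

# 2) Or use arithmetic evaluation, where an empty/unset name counts as 0:
if (( SOME_FLAG == 1 )); then
    echo "flag set"
fi
```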
nvmf/common.sh@321 -- # x722=() 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:14.429 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:14.429 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:14.429 Found net devices under 0000:31:00.0: cvl_0_0 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:14.429 Found net devices under 0000:31:00.1: cvl_0_1 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 
-- # is_hw=yes 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:14.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:14.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:09:14.429 00:09:14.429 --- 10.0.0.2 ping statistics --- 00:09:14.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.429 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:09:14.429 00:09:14.429 --- 10.0.0.1 ping statistics --- 00:09:14.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.429 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:09:14.429 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@727 -- # xtrace_disable 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=3177509 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 3177509 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # '[' -z 3177509 ']' 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local max_retries=100 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
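The block above is nvmftestinit wiring the two discovered e810 ports into a point-to-point TCP rig: cvl_0_0 is moved into a fresh network namespace for the target, cvl_0_1 stays in the root namespace for the initiator, both get 10.0.0.x/24 addresses, port 4420 is opened in iptables, and the two pings confirm reachability in each direction. Condensed into a standalone sketch (interface names and addresses taken from the log; error handling omitted):

```bash
#!/usr/bin/env bash
set -e

TARGET_IF=cvl_0_0        # NIC handed to the target namespace
INITIATOR_IF=cvl_0_1     # NIC left in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic into the initiator-side interface
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
```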
00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@843 -- # xtrace_disable 00:09:14.430 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:14.430 [2024-10-07 09:30:13.633501] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:09:14.430 [2024-10-07 09:30:13.633587] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.430 [2024-10-07 09:30:13.722443] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.430 [2024-10-07 09:30:13.818328] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.430 [2024-10-07 09:30:13.818383] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.430 [2024-10-07 09:30:13.818394] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.430 [2024-10-07 09:30:13.818401] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.430 [2024-10-07 09:30:13.818407] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.430 [2024-10-07 09:30:13.819189] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.002 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:09:15.002 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@867 -- # return 0 00:09:15.002 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:15.002 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@733 -- # xtrace_disable 00:09:15.002 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:15.002 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.002 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:15.263 [2024-10-07 09:30:14.666421] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.263 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:15.263 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:09:15.263 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1110 -- # xtrace_disable 00:09:15.263 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:15.263 ************************************ 00:09:15.263 START TEST lvs_grow_clean 00:09:15.263 ************************************ 00:09:15.263 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # lvs_grow 00:09:15.263 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:15.263 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 
00:09:15.263 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:15.263 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:15.263 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:15.263 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:15.263 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:15.263 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:15.263 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:15.524 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:15.524 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:15.524 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=adc01d75-541c-4c6c-92db-d8c2c986f3de 00:09:15.524 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u adc01d75-541c-4c6c-92db-d8c2c986f3de 00:09:15.524 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:15.785 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:15.785 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:15.785 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u adc01d75-541c-4c6c-92db-d8c2c986f3de lvol 150 00:09:16.046 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d75ec75d-83a7-4e1f-8d54-eac4143f5e68 00:09:16.046 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:16.046 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:16.307 [2024-10-07 09:30:15.710100] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:16.307 [2024-10-07 09:30:15.710173] vbdev_lvol.c: 
165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:16.307 true 00:09:16.307 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u adc01d75-541c-4c6c-92db-d8c2c986f3de 00:09:16.307 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:16.307 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:16.307 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:16.569 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d75ec75d-83a7-4e1f-8d54-eac4143f5e68 00:09:16.830 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:16.830 [2024-10-07 09:30:16.440411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.830 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:17.090 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3178010 00:09:17.090 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:17.090 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:17.090 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3178010 /var/tmp/bdevperf.sock 00:09:17.090 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # '[' -z 3178010 ']' 00:09:17.090 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:17.090 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local max_retries=100 00:09:17.090 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:17.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
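At this point the harness has nvmf_tgt running inside the namespace and a 150 MiB lvol carved from the AIO-backed lvstore; the traces above export it over NVMe/TCP and point a bdevperf instance at it. The same sequence as explicit rpc.py calls — method names and addresses are verbatim from the log, while `$rpc`/`$bperf_rpc` are shorthand for the two rpc.py invocations with their respective sockets:

```bash
#!/usr/bin/env bash
rpc="scripts/rpc.py"                                   # target RPC (in-netns)
bperf_rpc="scripts/rpc.py -s /var/tmp/bdevperf.sock"   # bdevperf RPC
LVOL_UUID=d75ec75d-83a7-4e1f-8d54-eac4143f5e68         # from bdev_lvol_create above

# Export the lvol over NVMe/TCP on 10.0.0.2:4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL_UUID"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Attach the exported namespace inside bdevperf as Nvme0n1
$bperf_rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

# Kick off the 10 s random-write run driven below
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
```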
00:09:17.090 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@843 -- # xtrace_disable 00:09:17.090 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:17.091 [2024-10-07 09:30:16.691861] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:09:17.091 [2024-10-07 09:30:16.691930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3178010 ] 00:09:17.351 [2024-10-07 09:30:16.772890] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.351 [2024-10-07 09:30:16.869091] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.923 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:09:17.923 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@867 -- # return 0 00:09:17.923 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:18.198 Nvme0n1 00:09:18.198 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:18.459 [ 00:09:18.459 { 00:09:18.459 "name": "Nvme0n1", 00:09:18.459 "aliases": [ 00:09:18.459 "d75ec75d-83a7-4e1f-8d54-eac4143f5e68" 00:09:18.459 ], 00:09:18.459 "product_name": "NVMe disk", 00:09:18.459 "block_size": 4096, 00:09:18.459 "num_blocks": 38912, 00:09:18.459 "uuid": "d75ec75d-83a7-4e1f-8d54-eac4143f5e68", 00:09:18.459 "numa_id": 0, 00:09:18.459 "assigned_rate_limits": { 00:09:18.459 "rw_ios_per_sec": 0, 00:09:18.459 "rw_mbytes_per_sec": 0, 00:09:18.459 "r_mbytes_per_sec": 0, 00:09:18.459 "w_mbytes_per_sec": 0 00:09:18.459 }, 00:09:18.459 "claimed": false, 00:09:18.459 "zoned": false, 00:09:18.459 "supported_io_types": { 00:09:18.459 "read": true, 00:09:18.459 "write": true, 00:09:18.459 "unmap": true, 00:09:18.459 "flush": true, 00:09:18.459 "reset": true, 00:09:18.459 "nvme_admin": true, 00:09:18.459 "nvme_io": true, 00:09:18.459 "nvme_io_md": false, 00:09:18.459 "write_zeroes": true, 00:09:18.459 "zcopy": false, 00:09:18.459 "get_zone_info": false, 00:09:18.459 "zone_management": false, 00:09:18.459 "zone_append": false, 00:09:18.459 "compare": true, 00:09:18.459 "compare_and_write": true, 00:09:18.459 "abort": true, 00:09:18.459 "seek_hole": false, 00:09:18.459 "seek_data": false, 00:09:18.459 "copy": true, 00:09:18.459 "nvme_iov_md": false 00:09:18.459 }, 00:09:18.459 "memory_domains": [ 00:09:18.459 { 00:09:18.459 "dma_device_id": "system", 00:09:18.459 "dma_device_type": 1 00:09:18.459 } 00:09:18.459 ], 00:09:18.459 "driver_specific": { 00:09:18.459 "nvme": [ 00:09:18.459 { 00:09:18.459 "trid": { 00:09:18.459 "trtype": "TCP", 00:09:18.459 "adrfam": "IPv4", 00:09:18.459 "traddr": "10.0.0.2", 00:09:18.459 "trsvcid": "4420", 00:09:18.459 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:18.459 }, 00:09:18.459 "ctrlr_data": { 00:09:18.459 "cntlid": 1, 00:09:18.459 "vendor_id": "0x8086", 00:09:18.459 
"model_number": "SPDK bdev Controller", 00:09:18.459 "serial_number": "SPDK0", 00:09:18.459 "firmware_revision": "25.01", 00:09:18.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:18.459 "oacs": { 00:09:18.459 "security": 0, 00:09:18.459 "format": 0, 00:09:18.459 "firmware": 0, 00:09:18.459 "ns_manage": 0 00:09:18.459 }, 00:09:18.459 "multi_ctrlr": true, 00:09:18.459 "ana_reporting": false 00:09:18.459 }, 00:09:18.459 "vs": { 00:09:18.459 "nvme_version": "1.3" 00:09:18.460 }, 00:09:18.460 "ns_data": { 00:09:18.460 "id": 1, 00:09:18.460 "can_share": true 00:09:18.460 } 00:09:18.460 } 00:09:18.460 ], 00:09:18.460 "mp_policy": "active_passive" 00:09:18.460 } 00:09:18.460 } 00:09:18.460 ] 00:09:18.460 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3178318 00:09:18.460 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:18.460 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:18.460 Running I/O for 10 seconds... 00:09:19.399 Latency(us) 00:09:19.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.399 Nvme0n1 : 1.00 23617.00 92.25 0.00 0.00 0.00 0.00 0.00 00:09:19.400 =================================================================================================================== 00:09:19.400 Total : 23617.00 92.25 0.00 0.00 0.00 0.00 0.00 00:09:19.400 00:09:20.337 09:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u adc01d75-541c-4c6c-92db-d8c2c986f3de 00:09:20.599 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.599 Nvme0n1 : 2.00 24504.00 95.72 0.00 0.00 0.00 0.00 0.00 00:09:20.599 =================================================================================================================== 00:09:20.599 Total : 24504.00 95.72 0.00 0.00 0.00 0.00 0.00 00:09:20.599 00:09:20.599 true 00:09:20.599 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u adc01d75-541c-4c6c-92db-d8c2c986f3de 00:09:20.599 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:20.859 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:20.859 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:20.859 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3178318 00:09:21.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.428 Nvme0n1 : 3.00 24815.33 96.93 0.00 0.00 0.00 0.00 0.00 00:09:21.429 =================================================================================================================== 00:09:21.429 Total : 24815.33 96.93 0.00 0.00 0.00 0.00 0.00 00:09:21.429 00:09:22.509 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:22.509 Nvme0n1 : 4.00 24995.50 97.64 0.00 0.00 0.00 0.00 0.00 00:09:22.509 =================================================================================================================== 00:09:22.509 Total : 24995.50 97.64 0.00 0.00 0.00 0.00 0.00 00:09:22.509 00:09:23.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.551 Nvme0n1 : 5.00 25103.00 98.06 0.00 0.00 0.00 0.00 0.00 00:09:23.551 =================================================================================================================== 00:09:23.551 Total : 25103.00 98.06 0.00 0.00 0.00 0.00 0.00 00:09:23.551 00:09:24.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.492 Nvme0n1 : 6.00 25175.00 98.34 0.00 0.00 0.00 0.00 0.00 00:09:24.492 =================================================================================================================== 00:09:24.492 Total : 25175.00 98.34 0.00 0.00 0.00 0.00 0.00 00:09:24.492 00:09:25.432 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.432 Nvme0n1 : 7.00 25235.43 98.58 0.00 0.00 0.00 0.00 0.00 00:09:25.432 =================================================================================================================== 00:09:25.432 Total : 25235.43 98.58 0.00 0.00 0.00 0.00 0.00 00:09:25.432 00:09:26.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.815 Nvme0n1 : 8.00 25280.75 98.75 0.00 0.00 0.00 0.00 0.00 00:09:26.815 =================================================================================================================== 00:09:26.815 Total : 25280.75 98.75 0.00 0.00 0.00 0.00 0.00 00:09:26.815 00:09:27.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.756 Nvme0n1 : 9.00 25316.11 98.89 0.00 0.00 0.00 0.00 0.00 00:09:27.756 =================================================================================================================== 00:09:27.756 Total : 25316.11 98.89 0.00 0.00 0.00 0.00 0.00 00:09:27.756 00:09:28.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.697 Nvme0n1 : 10.00 25344.40 99.00 0.00 0.00 0.00 0.00 0.00 00:09:28.697 =================================================================================================================== 00:09:28.697 Total : 25344.40 99.00 0.00 0.00 0.00 0.00 0.00 00:09:28.697 00:09:28.697 00:09:28.697 Latency(us) 00:09:28.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.697 Nvme0n1 : 10.00 25343.28 99.00 0.00 0.00 5047.05 2457.60 13216.43 00:09:28.697 =================================================================================================================== 00:09:28.697 Total : 25343.28 99.00 0.00 0.00 5047.05 2457.60 13216.43 00:09:28.697 { 00:09:28.697 "results": [ 00:09:28.697 { 00:09:28.697 "job": "Nvme0n1", 00:09:28.697 "core_mask": "0x2", 00:09:28.697 "workload": "randwrite", 00:09:28.697 "status": "finished", 00:09:28.697 "queue_depth": 128, 00:09:28.697 "io_size": 4096, 00:09:28.697 "runtime": 10.003005, 00:09:28.697 "iops": 25343.28434305491, 00:09:28.697 "mibps": 98.99720446505825, 00:09:28.697 "io_failed": 0, 00:09:28.697 "io_timeout": 0, 00:09:28.697 "avg_latency_us": 5047.051375690804, 00:09:28.697 "min_latency_us": 2457.6, 00:09:28.697 "max_latency_us": 13216.426666666666 00:09:28.697 } 00:09:28.697 ], 00:09:28.697 
"core_count": 1 00:09:28.697 } 00:09:28.697 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3178010 00:09:28.697 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' -z 3178010 ']' 00:09:28.697 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # kill -0 3178010 00:09:28.697 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # uname 00:09:28.697 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:09:28.697 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3178010 00:09:28.697 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:09:28.697 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:09:28.697 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3178010' 00:09:28.697 killing process with pid 3178010 00:09:28.697 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # kill 3178010 00:09:28.697 Received shutdown signal, test time was about 10.000000 seconds 00:09:28.697 00:09:28.697 Latency(us) 00:09:28.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.697 =================================================================================================================== 00:09:28.697 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:28.697 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@977 -- # wait 3178010 00:09:28.697 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:28.957 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:29.217 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u adc01d75-541c-4c6c-92db-d8c2c986f3de 00:09:29.217 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:29.217 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:29.217 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:29.217 09:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:29.477 [2024-10-07 09:30:29.008204] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:29.477 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u adc01d75-541c-4c6c-92db-d8c2c986f3de 00:09:29.477 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # local es=0 00:09:29.477 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u adc01d75-541c-4c6c-92db-d8c2c986f3de 00:09:29.477 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.477 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:09:29.477 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.477 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:09:29.477 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@647 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.477 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:09:29.477 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@647 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.477 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@647 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:29.477 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@656 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u adc01d75-541c-4c6c-92db-d8c2c986f3de 00:09:29.737 request: 00:09:29.737 { 00:09:29.737 "uuid": "adc01d75-541c-4c6c-92db-d8c2c986f3de", 00:09:29.737 "method": "bdev_lvol_get_lvstores", 00:09:29.737 "req_id": 1 00:09:29.737 } 00:09:29.737 Got JSON-RPC error response 00:09:29.737 response: 00:09:29.737 { 00:09:29.737 "code": -19, 00:09:29.737 "message": "No such device" 00:09:29.737 } 00:09:29.737 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@656 -- # es=1 00:09:29.737 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:09:29.737 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:09:29.737 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:09:29.737 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:29.997 aio_bdev 00:09:29.997 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d75ec75d-83a7-4e1f-8d54-eac4143f5e68 00:09:29.997 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 
-- # local bdev_name=d75ec75d-83a7-4e1f-8d54-eac4143f5e68 00:09:29.997 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:09:29.997 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local i 00:09:29.997 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:09:29.997 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:09:29.997 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:29.997 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d75ec75d-83a7-4e1f-8d54-eac4143f5e68 -t 2000 00:09:30.257 [ 00:09:30.257 { 00:09:30.257 "name": "d75ec75d-83a7-4e1f-8d54-eac4143f5e68", 00:09:30.257 "aliases": [ 00:09:30.257 "lvs/lvol" 00:09:30.257 ], 00:09:30.257 "product_name": "Logical Volume", 00:09:30.257 "block_size": 4096, 00:09:30.257 "num_blocks": 38912, 00:09:30.257 "uuid": "d75ec75d-83a7-4e1f-8d54-eac4143f5e68", 00:09:30.257 "assigned_rate_limits": { 00:09:30.257 "rw_ios_per_sec": 0, 00:09:30.257 "rw_mbytes_per_sec": 0, 00:09:30.257 "r_mbytes_per_sec": 0, 00:09:30.257 "w_mbytes_per_sec": 0 00:09:30.257 }, 00:09:30.257 "claimed": false, 00:09:30.257 "zoned": false, 00:09:30.257 "supported_io_types": { 00:09:30.257 "read": true, 00:09:30.257 "write": true, 00:09:30.257 "unmap": true, 00:09:30.257 "flush": false, 00:09:30.257 "reset": true, 00:09:30.257 "nvme_admin": false, 00:09:30.257 "nvme_io": false, 00:09:30.257 "nvme_io_md": false, 00:09:30.257 "write_zeroes": true, 00:09:30.257 "zcopy": false, 00:09:30.257 "get_zone_info": false, 00:09:30.257 "zone_management": false, 00:09:30.257 "zone_append": false, 00:09:30.257 "compare": false, 00:09:30.257 "compare_and_write": false, 00:09:30.257 "abort": false, 00:09:30.257 "seek_hole": true, 00:09:30.257 "seek_data": true, 00:09:30.257 "copy": false, 00:09:30.257 "nvme_iov_md": false 00:09:30.257 }, 00:09:30.257 "driver_specific": { 00:09:30.257 "lvol": { 00:09:30.257 "lvol_store_uuid": "adc01d75-541c-4c6c-92db-d8c2c986f3de", 00:09:30.257 "base_bdev": "aio_bdev", 00:09:30.257 "thin_provision": false, 00:09:30.257 "num_allocated_clusters": 38, 00:09:30.257 "snapshot": false, 00:09:30.257 "clone": false, 00:09:30.257 "esnap_clone": false 00:09:30.257 } 00:09:30.257 } 00:09:30.257 } 00:09:30.257 ] 00:09:30.257 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # return 0 00:09:30.257 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u adc01d75-541c-4c6c-92db-d8c2c986f3de 00:09:30.257 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:30.518 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:30.518 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
adc01d75-541c-4c6c-92db-d8c2c986f3de 00:09:30.518 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:30.518 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:30.518 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d75ec75d-83a7-4e1f-8d54-eac4143f5e68 00:09:30.778 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u adc01d75-541c-4c6c-92db-d8c2c986f3de 00:09:31.040 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:31.040 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:31.040 00:09:31.040 real 0m15.925s 00:09:31.040 user 0m15.564s 00:09:31.040 sys 0m1.435s 00:09:31.040 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # xtrace_disable 00:09:31.040 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:31.040 ************************************ 00:09:31.040 END TEST lvs_grow_clean 00:09:31.040 ************************************ 00:09:31.301 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:31.301 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:09:31.301 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1110 -- # xtrace_disable 00:09:31.301 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:31.301 ************************************ 00:09:31.301 START TEST lvs_grow_dirty 00:09:31.301 ************************************ 00:09:31.301 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # lvs_grow dirty 00:09:31.301 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:31.301 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:31.301 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:31.301 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:31.301 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:31.301 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:31.301 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:31.301 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:31.301 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:31.301 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:31.301 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:31.563 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0706a1a7-a75c-47c9-bcd9-7e342d959d1f 00:09:31.563 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0706a1a7-a75c-47c9-bcd9-7e342d959d1f 00:09:31.563 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:31.824 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:31.824 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:31.824 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0706a1a7-a75c-47c9-bcd9-7e342d959d1f lvol 150 00:09:31.824 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=748f3245-d5f7-4fbd-a7dc-a3793d3ea475 00:09:31.824 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:31.824 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:32.085 [2024-10-07 09:30:31.618259] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:32.085 [2024-10-07 09:30:31.618301] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:32.085 true 00:09:32.085 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0706a1a7-a75c-47c9-bcd9-7e342d959d1f 00:09:32.085 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:32.346 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:32.346 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 
-a -s SPDK0 00:09:32.346 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 748f3245-d5f7-4fbd-a7dc-a3793d3ea475 00:09:32.607 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:32.872 [2024-10-07 09:30:32.280170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.872 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:32.872 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3181349 00:09:32.872 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:32.872 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:32.872 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3181349 /var/tmp/bdevperf.sock 00:09:32.872 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # '[' -z 3181349 ']' 00:09:32.872 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:32.872 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local max_retries=100 00:09:32.872 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:32.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:32.872 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@843 -- # xtrace_disable 00:09:32.872 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:32.872 [2024-10-07 09:30:32.495815] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:09:32.872 [2024-10-07 09:30:32.495867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3181349 ] 00:09:33.134 [2024-10-07 09:30:32.572346] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.134 [2024-10-07 09:30:32.625906] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.707 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:09:33.707 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@867 -- # return 0 00:09:33.707 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:33.968 Nvme0n1 00:09:33.968 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:34.228 [ 00:09:34.228 { 00:09:34.228 "name": "Nvme0n1", 00:09:34.228 "aliases": [ 00:09:34.228 "748f3245-d5f7-4fbd-a7dc-a3793d3ea475" 00:09:34.228 ], 00:09:34.228 "product_name": "NVMe disk", 00:09:34.228 "block_size": 4096, 00:09:34.228 "num_blocks": 38912, 00:09:34.228 "uuid": "748f3245-d5f7-4fbd-a7dc-a3793d3ea475", 00:09:34.228 "numa_id": 0, 00:09:34.228 "assigned_rate_limits": { 00:09:34.228 "rw_ios_per_sec": 0, 00:09:34.228 "rw_mbytes_per_sec": 0, 00:09:34.228 "r_mbytes_per_sec": 0, 00:09:34.228 "w_mbytes_per_sec": 0 00:09:34.228 }, 00:09:34.228 "claimed": false, 00:09:34.228 "zoned": false, 00:09:34.228 "supported_io_types": { 00:09:34.228 "read": true, 00:09:34.228 "write": true, 00:09:34.228 "unmap": true, 00:09:34.228 "flush": true, 00:09:34.228 "reset": true, 00:09:34.228 "nvme_admin": true, 00:09:34.228 "nvme_io": true, 00:09:34.228 "nvme_io_md": false, 00:09:34.228 "write_zeroes": true, 00:09:34.228 "zcopy": false, 00:09:34.228 "get_zone_info": false, 00:09:34.228 "zone_management": false, 00:09:34.228 "zone_append": false, 00:09:34.228 "compare": true, 00:09:34.228 "compare_and_write": true, 00:09:34.228 "abort": true, 00:09:34.228 "seek_hole": false, 00:09:34.228 "seek_data": false, 00:09:34.228 "copy": true, 00:09:34.228 "nvme_iov_md": false 00:09:34.228 }, 00:09:34.228 "memory_domains": [ 00:09:34.228 { 00:09:34.229 "dma_device_id": "system", 00:09:34.229 "dma_device_type": 1 00:09:34.229 } 00:09:34.229 ], 00:09:34.229 "driver_specific": { 00:09:34.229 "nvme": [ 00:09:34.229 { 00:09:34.229 "trid": { 00:09:34.229 "trtype": "TCP", 00:09:34.229 "adrfam": "IPv4", 00:09:34.229 "traddr": "10.0.0.2", 00:09:34.229 "trsvcid": "4420", 00:09:34.229 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:34.229 }, 00:09:34.229 "ctrlr_data": { 00:09:34.229 "cntlid": 1, 00:09:34.229 "vendor_id": "0x8086", 00:09:34.229 "model_number": "SPDK bdev Controller", 00:09:34.229 "serial_number": "SPDK0", 00:09:34.229 "firmware_revision": "25.01", 00:09:34.229 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:34.229 "oacs": { 00:09:34.229 "security": 0, 00:09:34.229 "format": 0, 00:09:34.229 "firmware": 0, 00:09:34.229 "ns_manage": 0 00:09:34.229 }, 00:09:34.229 "multi_ctrlr": true, 00:09:34.229 
"ana_reporting": false 00:09:34.229 }, 00:09:34.229 "vs": { 00:09:34.229 "nvme_version": "1.3" 00:09:34.229 }, 00:09:34.229 "ns_data": { 00:09:34.229 "id": 1, 00:09:34.229 "can_share": true 00:09:34.229 } 00:09:34.229 } 00:09:34.229 ], 00:09:34.229 "mp_policy": "active_passive" 00:09:34.229 } 00:09:34.229 } 00:09:34.229 ] 00:09:34.229 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3181483 00:09:34.229 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:34.229 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:34.229 Running I/O for 10 seconds... 00:09:35.173 Latency(us) 00:09:35.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.173 Nvme0n1 : 1.00 25112.00 98.09 0.00 0.00 0.00 0.00 0.00 00:09:35.173 =================================================================================================================== 00:09:35.173 Total : 25112.00 98.09 0.00 0.00 0.00 0.00 0.00 00:09:35.173 00:09:36.115 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0706a1a7-a75c-47c9-bcd9-7e342d959d1f 00:09:36.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.376 Nvme0n1 : 2.00 25257.00 98.66 0.00 0.00 0.00 0.00 0.00 00:09:36.376 =================================================================================================================== 00:09:36.376 Total : 25257.00 98.66 0.00 0.00 0.00 0.00 0.00 00:09:36.376 00:09:36.376 true 00:09:36.376 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0706a1a7-a75c-47c9-bcd9-7e342d959d1f 00:09:36.376 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:36.636 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:36.636 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:36.636 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3181483 00:09:37.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.208 Nvme0n1 : 3.00 25329.00 98.94 0.00 0.00 0.00 0.00 0.00 00:09:37.208 =================================================================================================================== 00:09:37.208 Total : 25329.00 98.94 0.00 0.00 0.00 0.00 0.00 00:09:37.208 00:09:38.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.592 Nvme0n1 : 4.00 25380.75 99.14 0.00 0.00 0.00 0.00 0.00 00:09:38.592 =================================================================================================================== 00:09:38.592 Total : 25380.75 99.14 0.00 0.00 0.00 0.00 0.00 00:09:38.592 00:09:39.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.532 
Nvme0n1 : 5.00 25424.20 99.31 0.00 0.00 0.00 0.00 0.00 00:09:39.532 =================================================================================================================== 00:09:39.532 Total : 25424.20 99.31 0.00 0.00 0.00 0.00 0.00 00:09:39.532 00:09:40.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.471 Nvme0n1 : 6.00 25453.33 99.43 0.00 0.00 0.00 0.00 0.00 00:09:40.471 =================================================================================================================== 00:09:40.471 Total : 25453.33 99.43 0.00 0.00 0.00 0.00 0.00 00:09:40.471 00:09:41.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.411 Nvme0n1 : 7.00 25464.29 99.47 0.00 0.00 0.00 0.00 0.00 00:09:41.411 =================================================================================================================== 00:09:41.411 Total : 25464.29 99.47 0.00 0.00 0.00 0.00 0.00 00:09:41.411 00:09:42.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.353 Nvme0n1 : 8.00 25489.25 99.57 0.00 0.00 0.00 0.00 0.00 00:09:42.354 =================================================================================================================== 00:09:42.354 Total : 25489.25 99.57 0.00 0.00 0.00 0.00 0.00 00:09:42.354 00:09:43.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.297 Nvme0n1 : 9.00 25501.44 99.62 0.00 0.00 0.00 0.00 0.00 00:09:43.297 =================================================================================================================== 00:09:43.297 Total : 25501.44 99.62 0.00 0.00 0.00 0.00 0.00 00:09:43.297 00:09:44.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.239 Nvme0n1 : 10.00 25517.60 99.68 0.00 0.00 0.00 0.00 0.00 00:09:44.239 =================================================================================================================== 00:09:44.239 Total : 25517.60 99.68 0.00 0.00 0.00 0.00 0.00 00:09:44.239 00:09:44.239 00:09:44.239 Latency(us) 00:09:44.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.239 Nvme0n1 : 10.00 25511.85 99.66 0.00 0.00 5014.21 3085.65 10376.53 00:09:44.239 =================================================================================================================== 00:09:44.239 Total : 25511.85 99.66 0.00 0.00 5014.21 3085.65 10376.53 00:09:44.239 { 00:09:44.239 "results": [ 00:09:44.239 { 00:09:44.239 "job": "Nvme0n1", 00:09:44.239 "core_mask": "0x2", 00:09:44.239 "workload": "randwrite", 00:09:44.239 "status": "finished", 00:09:44.239 "queue_depth": 128, 00:09:44.239 "io_size": 4096, 00:09:44.239 "runtime": 10.004802, 00:09:44.239 "iops": 25511.849210009354, 00:09:44.239 "mibps": 99.65566097659904, 00:09:44.239 "io_failed": 0, 00:09:44.239 "io_timeout": 0, 00:09:44.239 "avg_latency_us": 5014.206151101639, 00:09:44.239 "min_latency_us": 3085.653333333333, 00:09:44.239 "max_latency_us": 10376.533333333333 00:09:44.239 } 00:09:44.239 ], 00:09:44.239 "core_count": 1 00:09:44.239 } 00:09:44.239 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3181349 00:09:44.239 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' -z 3181349 ']' 00:09:44.239 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@957 -- # kill -0 3181349 00:09:44.239 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # uname 00:09:44.239 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:09:44.239 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3181349 00:09:44.500 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:09:44.500 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:09:44.500 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3181349' 00:09:44.500 killing process with pid 3181349 00:09:44.500 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # kill 3181349 00:09:44.500 Received shutdown signal, test time was about 10.000000 seconds 00:09:44.500 00:09:44.500 Latency(us) 00:09:44.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.500 =================================================================================================================== 00:09:44.500 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:44.500 09:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@977 -- # wait 3181349 00:09:44.500 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:44.761 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:45.021 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0706a1a7-a75c-47c9-bcd9-7e342d959d1f 00:09:45.021 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:45.021 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:45.021 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:45.021 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3177509 00:09:45.021 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3177509 00:09:45.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3177509 Killed "${NVMF_APP[@]}" "$@" 00:09:45.021 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:45.021 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:45.021 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:45.021 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@727 -- # xtrace_disable 00:09:45.021 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:45.282 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=3183775 00:09:45.282 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 3183775 00:09:45.282 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:45.282 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # '[' -z 3183775 ']' 00:09:45.282 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.282 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local max_retries=100 00:09:45.282 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.282 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@843 -- # xtrace_disable 00:09:45.282 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:45.282 [2024-10-07 09:30:44.742790] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:09:45.282 [2024-10-07 09:30:44.742843] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.282 [2024-10-07 09:30:44.828575] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.282 [2024-10-07 09:30:44.882021] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.282 [2024-10-07 09:30:44.882052] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.282 [2024-10-07 09:30:44.882057] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.282 [2024-10-07 09:30:44.882062] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.282 [2024-10-07 09:30:44.882066] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:45.282 [2024-10-07 09:30:44.882548] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.224 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:09:46.224 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@867 -- # return 0 00:09:46.224 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:46.224 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@733 -- # xtrace_disable 00:09:46.224 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:46.224 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.224 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:46.224 [2024-10-07 09:30:45.736272] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:46.224 [2024-10-07 09:30:45.736395] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:46.224 [2024-10-07 09:30:45.736417] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:46.224 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:46.224 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 748f3245-d5f7-4fbd-a7dc-a3793d3ea475 00:09:46.224 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_name=748f3245-d5f7-4fbd-a7dc-a3793d3ea475 00:09:46.224 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:09:46.224 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local i 00:09:46.224 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:09:46.224 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:09:46.224 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:46.486 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 748f3245-d5f7-4fbd-a7dc-a3793d3ea475 -t 2000 00:09:46.486 [ 00:09:46.486 { 00:09:46.486 "name": "748f3245-d5f7-4fbd-a7dc-a3793d3ea475", 00:09:46.486 "aliases": [ 00:09:46.486 "lvs/lvol" 00:09:46.486 ], 00:09:46.486 "product_name": "Logical Volume", 00:09:46.486 "block_size": 4096, 00:09:46.486 "num_blocks": 38912, 00:09:46.486 "uuid": "748f3245-d5f7-4fbd-a7dc-a3793d3ea475", 00:09:46.486 "assigned_rate_limits": { 00:09:46.486 "rw_ios_per_sec": 0, 00:09:46.486 "rw_mbytes_per_sec": 0, 00:09:46.486 "r_mbytes_per_sec": 0, 00:09:46.486 "w_mbytes_per_sec": 0 00:09:46.486 }, 00:09:46.486 "claimed": false, 00:09:46.486 "zoned": false, 
00:09:46.486 "supported_io_types": { 00:09:46.486 "read": true, 00:09:46.486 "write": true, 00:09:46.486 "unmap": true, 00:09:46.486 "flush": false, 00:09:46.486 "reset": true, 00:09:46.486 "nvme_admin": false, 00:09:46.486 "nvme_io": false, 00:09:46.486 "nvme_io_md": false, 00:09:46.486 "write_zeroes": true, 00:09:46.486 "zcopy": false, 00:09:46.486 "get_zone_info": false, 00:09:46.486 "zone_management": false, 00:09:46.486 "zone_append": false, 00:09:46.486 "compare": false, 00:09:46.486 "compare_and_write": false, 00:09:46.486 "abort": false, 00:09:46.486 "seek_hole": true, 00:09:46.486 "seek_data": true, 00:09:46.486 "copy": false, 00:09:46.486 "nvme_iov_md": false 00:09:46.486 }, 00:09:46.486 "driver_specific": { 00:09:46.486 "lvol": { 00:09:46.486 "lvol_store_uuid": "0706a1a7-a75c-47c9-bcd9-7e342d959d1f", 00:09:46.486 "base_bdev": "aio_bdev", 00:09:46.486 "thin_provision": false, 00:09:46.486 "num_allocated_clusters": 38, 00:09:46.486 "snapshot": false, 00:09:46.486 "clone": false, 00:09:46.486 "esnap_clone": false 00:09:46.486 } 00:09:46.486 } 00:09:46.486 } 00:09:46.486 ] 00:09:46.486 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # return 0 00:09:46.486 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0706a1a7-a75c-47c9-bcd9-7e342d959d1f 00:09:46.486 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:46.747 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:46.747 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0706a1a7-a75c-47c9-bcd9-7e342d959d1f 00:09:46.747 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:47.008 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:47.008 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:47.008 [2024-10-07 09:30:46.633036] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:47.008 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0706a1a7-a75c-47c9-bcd9-7e342d959d1f 00:09:47.008 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # local es=0 00:09:47.008 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0706a1a7-a75c-47c9-bcd9-7e342d959d1f 00:09:47.008 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:47.269 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # case "$(type -t 
"$arg")" in 00:09:47.269 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:47.269 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:09:47.269 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@647 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:47.269 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:09:47.269 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@647 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:47.269 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@647 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:47.269 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@656 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0706a1a7-a75c-47c9-bcd9-7e342d959d1f 00:09:47.269 request: 00:09:47.269 { 00:09:47.269 "uuid": "0706a1a7-a75c-47c9-bcd9-7e342d959d1f", 00:09:47.269 "method": "bdev_lvol_get_lvstores", 00:09:47.269 "req_id": 1 00:09:47.269 } 00:09:47.269 Got JSON-RPC error response 00:09:47.269 response: 00:09:47.269 { 00:09:47.269 "code": -19, 00:09:47.269 "message": "No such device" 00:09:47.269 } 00:09:47.269 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@656 -- # es=1 00:09:47.269 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:09:47.269 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:09:47.269 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:09:47.269 09:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:47.529 aio_bdev 00:09:47.529 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 748f3245-d5f7-4fbd-a7dc-a3793d3ea475 00:09:47.529 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_name=748f3245-d5f7-4fbd-a7dc-a3793d3ea475 00:09:47.529 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:09:47.529 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local i 00:09:47.530 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:09:47.530 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:09:47.530 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:47.530 09:30:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 748f3245-d5f7-4fbd-a7dc-a3793d3ea475 -t 2000 00:09:47.791 [ 00:09:47.791 { 00:09:47.791 "name": "748f3245-d5f7-4fbd-a7dc-a3793d3ea475", 00:09:47.791 "aliases": [ 00:09:47.791 "lvs/lvol" 00:09:47.791 ], 00:09:47.791 "product_name": "Logical Volume", 00:09:47.791 "block_size": 4096, 00:09:47.791 "num_blocks": 38912, 00:09:47.791 "uuid": "748f3245-d5f7-4fbd-a7dc-a3793d3ea475", 00:09:47.791 "assigned_rate_limits": { 00:09:47.791 "rw_ios_per_sec": 0, 00:09:47.791 "rw_mbytes_per_sec": 0, 00:09:47.791 "r_mbytes_per_sec": 0, 00:09:47.791 "w_mbytes_per_sec": 0 00:09:47.791 }, 00:09:47.791 "claimed": false, 00:09:47.791 "zoned": false, 00:09:47.791 "supported_io_types": { 00:09:47.791 "read": true, 00:09:47.791 "write": true, 00:09:47.791 "unmap": true, 00:09:47.791 "flush": false, 00:09:47.791 "reset": true, 00:09:47.791 "nvme_admin": false, 00:09:47.791 "nvme_io": false, 00:09:47.791 "nvme_io_md": false, 00:09:47.791 "write_zeroes": true, 00:09:47.791 "zcopy": false, 00:09:47.791 "get_zone_info": false, 00:09:47.791 "zone_management": false, 00:09:47.791 "zone_append": false, 00:09:47.791 "compare": false, 00:09:47.791 "compare_and_write": false, 00:09:47.791 "abort": false, 00:09:47.791 "seek_hole": true, 00:09:47.791 "seek_data": true, 00:09:47.791 "copy": false, 00:09:47.791 "nvme_iov_md": false 00:09:47.791 }, 00:09:47.791 "driver_specific": { 00:09:47.792 "lvol": { 00:09:47.792 "lvol_store_uuid": "0706a1a7-a75c-47c9-bcd9-7e342d959d1f", 00:09:47.792 "base_bdev": "aio_bdev", 00:09:47.792 "thin_provision": false, 00:09:47.792 "num_allocated_clusters": 38, 00:09:47.792 "snapshot": false, 00:09:47.792 "clone": false, 00:09:47.792 "esnap_clone": false 00:09:47.792 } 00:09:47.792 } 00:09:47.792 } 00:09:47.792 ] 00:09:47.792 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # return 0 00:09:47.792 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0706a1a7-a75c-47c9-bcd9-7e342d959d1f 00:09:47.792 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:48.053 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:48.053 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0706a1a7-a75c-47c9-bcd9-7e342d959d1f 00:09:48.053 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:48.053 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:48.053 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 748f3245-d5f7-4fbd-a7dc-a3793d3ea475 00:09:48.314 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0706a1a7-a75c-47c9-bcd9-7e342d959d1f 
00:09:48.575 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:48.575 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:48.575 00:09:48.575 real 0m17.455s 00:09:48.575 user 0m45.643s 00:09:48.575 sys 0m3.149s 00:09:48.575 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # xtrace_disable 00:09:48.575 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:48.575 ************************************ 00:09:48.575 END TEST lvs_grow_dirty 00:09:48.575 ************************************ 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # type=--id 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # id=0 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # '[' --id = --pid ']' 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # shm_files=nvmf_trace.0 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # [[ -z nvmf_trace.0 ]] 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # for n in $shm_files 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:48.842 nvmf_trace.0 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@826 -- # return 0 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:48.842 rmmod nvme_tcp 00:09:48.842 rmmod nvme_fabrics 00:09:48.842 rmmod nvme_keyring 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 3183775 ']' 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 3183775 00:09:48.842 
09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' -z 3183775 ']' 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # kill -0 3183775 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # uname 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3183775 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3183775' 00:09:48.842 killing process with pid 3183775 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # kill 3183775 00:09:48.842 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@977 -- # wait 3183775 00:09:49.104 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:49.104 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:49.104 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:49.104 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:49.104 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:09:49.104 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:49.104 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:09:49.104 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:49.104 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:49.104 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.104 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.104 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.016 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:51.016 00:09:51.016 real 0m45.208s 00:09:51.016 user 1m7.778s 00:09:51.016 sys 0m10.949s 00:09:51.016 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # xtrace_disable 00:09:51.016 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:51.016 ************************************ 00:09:51.016 END TEST nvmf_lvs_grow 00:09:51.016 ************************************ 00:09:51.277 09:30:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:51.277 09:30:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:09:51.277 09:30:50 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1110 -- # xtrace_disable 00:09:51.277 09:30:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:51.277 ************************************ 00:09:51.277 START TEST nvmf_bdev_io_wait 00:09:51.277 ************************************ 00:09:51.277 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:51.277 * Looking for test storage... 00:09:51.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:51.277 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:09:51.277 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1626 -- # lcov --version 00:09:51.277 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:09:51.538 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:09:51.538 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.538 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.538 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.538 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.538 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.538 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:09:51.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.539 --rc genhtml_branch_coverage=1 00:09:51.539 --rc genhtml_function_coverage=1 00:09:51.539 --rc genhtml_legend=1 00:09:51.539 --rc geninfo_all_blocks=1 00:09:51.539 --rc geninfo_unexecuted_blocks=1 00:09:51.539 00:09:51.539 ' 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:09:51.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.539 --rc genhtml_branch_coverage=1 00:09:51.539 --rc genhtml_function_coverage=1 00:09:51.539 --rc genhtml_legend=1 00:09:51.539 --rc geninfo_all_blocks=1 00:09:51.539 --rc geninfo_unexecuted_blocks=1 00:09:51.539 00:09:51.539 ' 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:09:51.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.539 --rc genhtml_branch_coverage=1 00:09:51.539 --rc genhtml_function_coverage=1 00:09:51.539 --rc genhtml_legend=1 00:09:51.539 --rc geninfo_all_blocks=1 00:09:51.539 --rc geninfo_unexecuted_blocks=1 00:09:51.539 00:09:51.539 ' 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:09:51.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.539 --rc genhtml_branch_coverage=1 00:09:51.539 --rc genhtml_function_coverage=1 00:09:51.539 --rc genhtml_legend=1 00:09:51.539 --rc geninfo_all_blocks=1 00:09:51.539 --rc geninfo_unexecuted_blocks=1 00:09:51.539 00:09:51.539 ' 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.539 09:30:50 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:51.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.539 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:51.540 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:51.540 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:51.540 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.540 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.540 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.540 09:30:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:51.540 09:30:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:51.540 09:30:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:51.540 09:30:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.683 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:59.684 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:59.684 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
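The "Found 0000:31:00.0 (0x8086 - 0x159b)" lines above come from bucketing PCI functions by vendor:device pairs; 0x8086:0x159b is an Intel E810 port, which is why both functions land in the e810 array. A standalone sysfs sketch of the same classification (the real gather step also consults a prebuilt pci_bus_cache rather than re-reading sysfs each time):

# Classify NICs by PCI vendor/device ID, as reported above (sketch):
for dev in /sys/bus/pci/devices/*; do
  vendor=$(<"$dev/vendor")
  device=$(<"$dev/device")
  case "$vendor:$device" in
    0x8086:0x1592|0x8086:0x159b)
      echo "Found ${dev##*/} ($vendor - $device)" ;;  # E810 port
  esac
done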
00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:59.684 Found net devices under 0000:31:00.0: cvl_0_0 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:59.684 Found net devices under 0000:31:00.1: cvl_0_1 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
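The variables just set describe the whole test topology: both E810 ports live in one chassis, so the target port cvl_0_0 (10.0.0.2) is moved into the private namespace cvl_0_0_ns_spdk while the initiator port cvl_0_1 (10.0.0.1) stays in the root namespace, which forces genuine on-wire NVMe/TCP between the two ports. The commands that follow in the trace reduce to this sequence:

# nvmf_tcp_init, condensed from the trace below:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays local
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                   # reachability check, both ways
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1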
00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:59.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:09:59.684 00:09:59.684 --- 10.0.0.2 ping statistics --- 00:09:59.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.684 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:59.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:09:59.684 00:09:59.684 --- 10.0.0.1 ping statistics --- 00:09:59.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.684 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@727 -- # xtrace_disable 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=3188922 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 3188922 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # '[' -z 3188922 ']' 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local max_retries=100 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@843 -- # xtrace_disable 00:09:59.684 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.684 [2024-10-07 09:30:58.789175] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
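nvmfappstart runs nvmf_tgt inside the namespace and waitforlisten then polls the UNIX-domain RPC socket instead of sleeping for a fixed time. A sketch of that startup handshake; the real waitforlisten in autotest_common.sh wraps the same idea in retry bookkeeping:

# Start the target in the namespace, then poll /var/tmp/spdk.sock:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
  kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died
  sleep 0.1
done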
00:09:59.684 [2024-10-07 09:30:58.789242] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.684 [2024-10-07 09:30:58.856229] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.685 [2024-10-07 09:30:58.942698] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.685 [2024-10-07 09:30:58.942755] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.685 [2024-10-07 09:30:58.942762] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.685 [2024-10-07 09:30:58.942768] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.685 [2024-10-07 09:30:58.942772] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.685 [2024-10-07 09:30:58.944593] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.685 [2024-10-07 09:30:58.944763] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.685 [2024-10-07 09:30:58.945037] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.685 [2024-10-07 09:30:58.945039] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.685 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:09:59.685 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@867 -- # return 0 00:09:59.685 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:59.685 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@733 -- # xtrace_disable 00:09:59.685 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@564 -- # xtrace_disable 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@564 -- # xtrace_disable 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@564 -- # xtrace_disable 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:59.685 [2024-10-07 09:30:59.099447] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@564 -- # xtrace_disable 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.685 Malloc0 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@564 -- # xtrace_disable 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@564 -- # xtrace_disable 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@564 -- # xtrace_disable 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.685 [2024-10-07 09:30:59.177407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3188951 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3188953 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:59.685 { 00:09:59.685 "params": { 
00:09:59.685 "name": "Nvme$subsystem", 00:09:59.685 "trtype": "$TEST_TRANSPORT", 00:09:59.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:59.685 "adrfam": "ipv4", 00:09:59.685 "trsvcid": "$NVMF_PORT", 00:09:59.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:59.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:59.685 "hdgst": ${hdgst:-false}, 00:09:59.685 "ddgst": ${ddgst:-false} 00:09:59.685 }, 00:09:59.685 "method": "bdev_nvme_attach_controller" 00:09:59.685 } 00:09:59.685 EOF 00:09:59.685 )") 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3188955 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:59.685 { 00:09:59.685 "params": { 00:09:59.685 "name": "Nvme$subsystem", 00:09:59.685 "trtype": "$TEST_TRANSPORT", 00:09:59.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:59.685 "adrfam": "ipv4", 00:09:59.685 "trsvcid": "$NVMF_PORT", 00:09:59.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:59.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:59.685 "hdgst": ${hdgst:-false}, 00:09:59.685 "ddgst": ${ddgst:-false} 00:09:59.685 }, 00:09:59.685 "method": "bdev_nvme_attach_controller" 00:09:59.685 } 00:09:59.685 EOF 00:09:59.685 )") 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3188958 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:59.685 { 00:09:59.685 "params": { 00:09:59.685 "name": "Nvme$subsystem", 00:09:59.685 "trtype": "$TEST_TRANSPORT", 00:09:59.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:59.685 "adrfam": "ipv4", 00:09:59.685 "trsvcid": "$NVMF_PORT", 00:09:59.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:59.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:59.685 "hdgst": ${hdgst:-false}, 
00:09:59.685 "ddgst": ${ddgst:-false} 00:09:59.685 }, 00:09:59.685 "method": "bdev_nvme_attach_controller" 00:09:59.685 } 00:09:59.685 EOF 00:09:59.685 )") 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:59.685 { 00:09:59.685 "params": { 00:09:59.685 "name": "Nvme$subsystem", 00:09:59.685 "trtype": "$TEST_TRANSPORT", 00:09:59.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:59.685 "adrfam": "ipv4", 00:09:59.685 "trsvcid": "$NVMF_PORT", 00:09:59.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:59.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:59.685 "hdgst": ${hdgst:-false}, 00:09:59.685 "ddgst": ${ddgst:-false} 00:09:59.685 }, 00:09:59.685 "method": "bdev_nvme_attach_controller" 00:09:59.685 } 00:09:59.685 EOF 00:09:59.685 )") 00:09:59.685 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:59.686 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3188951 00:09:59.686 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:59.686 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:59.686 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:59.686 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:59.686 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:59.686 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:59.686 "params": { 00:09:59.686 "name": "Nvme1", 00:09:59.686 "trtype": "tcp", 00:09:59.686 "traddr": "10.0.0.2", 00:09:59.686 "adrfam": "ipv4", 00:09:59.686 "trsvcid": "4420", 00:09:59.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:59.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:59.686 "hdgst": false, 00:09:59.686 "ddgst": false 00:09:59.686 }, 00:09:59.686 "method": "bdev_nvme_attach_controller" 00:09:59.686 }' 00:09:59.686 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:09:59.686 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:59.686 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:59.686 "params": { 00:09:59.686 "name": "Nvme1", 00:09:59.686 "trtype": "tcp", 00:09:59.686 "traddr": "10.0.0.2", 00:09:59.686 "adrfam": "ipv4", 00:09:59.686 "trsvcid": "4420", 00:09:59.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:59.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:59.686 "hdgst": false, 00:09:59.686 "ddgst": false 00:09:59.686 }, 00:09:59.686 "method": "bdev_nvme_attach_controller" 00:09:59.686 }' 00:09:59.686 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:59.686 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:59.686 "params": { 00:09:59.686 "name": "Nvme1", 00:09:59.686 "trtype": "tcp", 00:09:59.686 "traddr": "10.0.0.2", 00:09:59.686 "adrfam": "ipv4", 00:09:59.686 "trsvcid": "4420", 00:09:59.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:59.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:59.686 "hdgst": false, 00:09:59.686 "ddgst": false 00:09:59.686 }, 00:09:59.686 "method": "bdev_nvme_attach_controller" 00:09:59.686 }' 00:09:59.686 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:59.686 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:59.686 "params": { 00:09:59.686 "name": "Nvme1", 00:09:59.686 "trtype": "tcp", 00:09:59.686 "traddr": "10.0.0.2", 00:09:59.686 "adrfam": "ipv4", 00:09:59.686 "trsvcid": "4420", 00:09:59.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:59.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:59.686 "hdgst": false, 00:09:59.686 "ddgst": false 00:09:59.686 }, 00:09:59.686 "method": "bdev_nvme_attach_controller" 00:09:59.686 }' 00:09:59.686 [2024-10-07 09:30:59.236450] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:09:59.686 [2024-10-07 09:30:59.236518] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:59.686 [2024-10-07 09:30:59.238426] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:09:59.686 [2024-10-07 09:30:59.238487] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:59.686 [2024-10-07 09:30:59.239393] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:09:59.686 [2024-10-07 09:30:59.239452] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:59.686 [2024-10-07 09:30:59.241154] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
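Four bdevperf instances now start against the same cnode1, one per workload (write, read, flush, unmap), kept from colliding by disjoint core masks (0x10, 0x20, 0x40, 0x80), distinct shm IDs (-i 1..4) and DPDK file prefixes (spdk1..spdk4) so four DPDK processes can coexist on one host. The launch pattern, abridged to its shape rather than the verbatim script:

# Concurrent bdevperf jobs against one subsystem (abridged shape):
bdevperf=./build/examples/bdevperf
$bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
$bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID
# <(...) is what appears as --json /dev/fd/63 in the logged command lines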
00:09:59.686 [2024-10-07 09:30:59.241223] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:09:59.947 [2024-10-07 09:30:59.442710] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:59.947 [2024-10-07 09:30:59.515434] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
00:09:59.947 [2024-10-07 09:30:59.534528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:59.947 [2024-10-07 09:30:59.604794] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5
00:10:00.208 [2024-10-07 09:30:59.626820] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:00.208 [2024-10-07 09:30:59.697715] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:00.208 [2024-10-07 09:30:59.702437] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7
00:10:00.208 [2024-10-07 09:30:59.765609] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6
00:10:00.469 Running I/O for 1 seconds...
00:10:00.469 Running I/O for 1 seconds...
00:10:00.469 Running I/O for 1 seconds...
00:10:00.729 Running I/O for 1 seconds...
00:10:01.300 7891.00 IOPS, 30.82 MiB/s
00:10:01.300 Latency(us)
00:10:01.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:01.300 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:10:01.300 Nvme1n1 : 1.02 7911.42 30.90 0.00 0.00 16074.47 6144.00 29709.65
00:10:01.300 ===================================================================================================================
00:10:01.300 Total : 7911.42 30.90 0.00 0.00 16074.47 6144.00 29709.65
00:10:01.561 7568.00 IOPS, 29.56 MiB/s
00:10:01.561 Latency(us)
00:10:01.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:01.561 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:10:01.561 Nvme1n1 : 1.01 7676.04 29.98 0.00 0.00 16625.07 4560.21 36481.71
00:10:01.561 ===================================================================================================================
00:10:01.561 Total : 7676.04 29.98 0.00 0.00 16625.07 4560.21 36481.71
00:10:01.561 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3188953
00:10:01.561 11060.00 IOPS, 43.20 MiB/s
00:10:01.561 Latency(us)
00:10:01.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:01.561 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:10:01.561 Nvme1n1 : 1.01 11116.87 43.43 0.00 0.00 11474.73 4669.44 20097.71
00:10:01.561 ===================================================================================================================
00:10:01.561 Total : 11116.87 43.43 0.00 0.00 11474.73 4669.44 20097.71
00:10:01.561 188152.00 IOPS, 734.97 MiB/s
00:10:01.561 Latency(us)
00:10:01.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:01.561 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:10:01.561 Nvme1n1 : 1.00 187775.81 733.50 0.00 0.00 678.16 310.61 1979.73
00:10:01.561 ===================================================================================================================
00:10:01.561 Total : 187775.81 733.50 0.00 0.00 678.16 310.61 1979.73
00:10:01.822 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@39 -- # wait 3188955 00:10:01.822 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3188958 00:10:01.822 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:01.822 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@564 -- # xtrace_disable 00:10:01.822 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.822 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:10:01.822 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:01.822 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:01.822 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:01.822 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:01.822 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:01.822 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:01.822 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:01.822 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:01.822 rmmod nvme_tcp 00:10:01.822 rmmod nvme_fabrics 00:10:01.822 rmmod nvme_keyring 00:10:01.822 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:01.823 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:01.823 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:01.823 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 3188922 ']' 00:10:01.823 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 3188922 00:10:01.823 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' -z 3188922 ']' 00:10:01.823 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # kill -0 3188922 00:10:01.823 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # uname 00:10:01.823 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:10:01.823 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3188922 00:10:02.084 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:10:02.084 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:10:02.084 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3188922' 00:10:02.084 killing process with pid 3188922 00:10:02.084 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # kill 3188922 00:10:02.084 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@977 -- # wait 3188922 00:10:02.084 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:02.084 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:02.084 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:02.084 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:02.084 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:10:02.084 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:02.084 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:10:02.084 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:02.084 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:02.084 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.084 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.084 09:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.628 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:04.628 00:10:04.628 real 0m13.031s 00:10:04.628 user 0m18.375s 00:10:04.628 sys 0m7.804s 00:10:04.628 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # xtrace_disable 00:10:04.628 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:04.628 ************************************ 00:10:04.628 END TEST nvmf_bdev_io_wait 00:10:04.628 ************************************ 00:10:04.628 09:31:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:04.628 09:31:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:10:04.628 09:31:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1110 -- # xtrace_disable 00:10:04.628 09:31:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.628 ************************************ 00:10:04.628 START TEST nvmf_queue_depth 00:10:04.628 ************************************ 00:10:04.628 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:04.628 * Looking for test storage... 
00:10:04.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.628 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:10:04.628 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:10:04.628 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1626 -- # lcov --version 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:10:04.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.628 --rc genhtml_branch_coverage=1 00:10:04.628 --rc genhtml_function_coverage=1 00:10:04.628 --rc genhtml_legend=1 00:10:04.628 --rc geninfo_all_blocks=1 00:10:04.628 --rc geninfo_unexecuted_blocks=1 00:10:04.628 00:10:04.628 ' 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:10:04.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.628 --rc genhtml_branch_coverage=1 00:10:04.628 --rc genhtml_function_coverage=1 00:10:04.628 --rc genhtml_legend=1 00:10:04.628 --rc geninfo_all_blocks=1 00:10:04.628 --rc geninfo_unexecuted_blocks=1 00:10:04.628 00:10:04.628 ' 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:10:04.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.628 --rc genhtml_branch_coverage=1 00:10:04.628 --rc genhtml_function_coverage=1 00:10:04.628 --rc genhtml_legend=1 00:10:04.628 --rc geninfo_all_blocks=1 00:10:04.628 --rc geninfo_unexecuted_blocks=1 00:10:04.628 00:10:04.628 ' 00:10:04.628 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:10:04.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.628 --rc genhtml_branch_coverage=1 00:10:04.628 --rc genhtml_function_coverage=1 00:10:04.628 --rc genhtml_legend=1 00:10:04.628 --rc geninfo_all_blocks=1 00:10:04.628 --rc geninfo_unexecuted_blocks=1 00:10:04.628 00:10:04.628 ' 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
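The lt 1.15 2 check above gates which lcov flags get exported: cmp_versions splits each version on "." and "-" into an array and compares field by field, treating missing fields as zero. A trimmed sketch of that comparison in the style of scripts/common.sh, not its verbatim source:

# Field-wise "less than" for dotted versions (sketch of cmp_versions):
lt() {
  local -a ver1 ver2
  IFS=.- read -ra ver1 <<< "$1"
  IFS=.- read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1
}
lt 1.15 2 && echo "lcov predates 2.x"   # true: 1 < 2 in the first field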
-- nvmf/common.sh@7 -- # uname -s 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.629 
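nvmftestinit begins by calling remove_spdk_ns so a namespace left behind by a crashed earlier run cannot shadow this one; the eval '_remove_spdk_ns 15> /dev/null' wrapper appears to route the xtrace file descriptor to /dev/null for just that call, which would explain why the helper's internals do not flood the trace. A sketch of the cleanup idea (the real helper lives in the sourced common code):

# Delete stale SPDK test namespaces before re-creating them (sketch):
remove_stale_spdk_ns() {
  local ns _
  while read -r ns _; do
    [[ $ns == *_ns_spdk ]] && ip netns delete "$ns"
  done < <(ip netns list)
}
remove_stale_spdk_ns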
09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:04.629 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.778 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.778 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:12.778 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:12.778 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:12.778 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:12.778 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:12.778 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:12.778 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:12.778 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:12.778 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:12.778 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:12.778 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:12.778 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:12.778 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:12.778 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:12.778 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.778 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.778 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
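The block above is gather_supported_nvmf_pci_devs classifying NICs by PCI vendor:device pairs (0x8086:0x1592/0x159b for Intel E810, 0x8086:0x37d2 for X722, a list of 0x15b3 IDs for Mellanox ConnectX parts) before keeping only the requested class. A standalone approximation of the same lookup; the harness itself reads a prebuilt pci_bus_cache map rather than calling lspci directly:

intel=8086 mellanox=15b3
# Collect PCI addresses whose vendor:device matches the E810 IDs probed above.
mapfile -t e810 < <(lspci -Dn | awk -v v="$intel" '$3 ~ v":(1592|159b)" {print $1}')
mapfile -t mlx  < <(lspci -Dn | awk -v v="$mellanox" '$3 ~ v":(1017|1019|101b|101d)" {print $1}')
pci_devs=("${e810[@]}")   # the e810 class was requested, so only the two E810 ports survive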
00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:12.779 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:12.779 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:12.779 09:31:11 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:12.779 Found net devices under 0000:31:00.0: cvl_0_0 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:12.779 Found net devices under 0000:31:00.1: cvl_0_1 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:12.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:10:12.779 00:10:12.779 --- 10.0.0.2 ping statistics --- 00:10:12.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.779 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:12.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:10:12.779 00:10:12.779 --- 10.0.0.1 ping statistics --- 00:10:12.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.779 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:12.779 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:12.780 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:12.780 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:12.780 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@727 -- # xtrace_disable 00:10:12.780 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.780 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=3193828 00:10:12.780 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 3193828 00:10:12.780 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:12.780 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # '[' -z 3193828 ']' 00:10:12.780 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.780 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local max_retries=100 00:10:12.780 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.780 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@843 -- # xtrace_disable 00:10:12.780 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.780 [2024-10-07 09:31:11.888404] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
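Reconstructed from the trace, nvmf_tcp_init splits the two E810 ports across a network namespace so initiator and target traffic crosses the physical link: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and an iptables rule admits port 4420. The equivalent standalone sequence:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target, inside the netns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # tagged SPDK_NVMF in the trace
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator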
00:10:12.780 [2024-10-07 09:31:11.888471] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.780 [2024-10-07 09:31:11.964925] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.780 [2024-10-07 09:31:12.059411] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.780 [2024-10-07 09:31:12.059471] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.780 [2024-10-07 09:31:12.059480] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.780 [2024-10-07 09:31:12.059487] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.780 [2024-10-07 09:31:12.059493] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.780 [2024-10-07 09:31:12.060274] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.041 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:10:13.041 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@867 -- # return 0 00:10:13.041 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:13.302 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@733 -- # xtrace_disable 00:10:13.302 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.302 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.302 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:13.302 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@564 -- # xtrace_disable 00:10:13.302 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.302 [2024-10-07 09:31:12.753378] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.302 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:10:13.302 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:13.302 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@564 -- # xtrace_disable 00:10:13.302 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.302 Malloc0 00:10:13.302 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:10:13.302 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:13.302 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@564 -- # xtrace_disable 00:10:13.302 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.302 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:10:13.302 09:31:12 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:13.302 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@564 -- # xtrace_disable 00:10:13.302 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.303 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:10:13.303 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.303 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@564 -- # xtrace_disable 00:10:13.303 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.303 [2024-10-07 09:31:12.822416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.303 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:10:13.303 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3194082 00:10:13.303 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:13.303 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:13.303 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3194082 /var/tmp/bdevperf.sock 00:10:13.303 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # '[' -z 3194082 ']' 00:10:13.303 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:13.303 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local max_retries=100 00:10:13.303 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:13.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:13.303 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@843 -- # xtrace_disable 00:10:13.303 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.303 [2024-10-07 09:31:12.890518] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
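queue_depth.sh provisions the target over the default RPC socket and then starts bdevperf against its own RPC socket with a 1024-deep queue. The RPC sequence visible above, written out; rpc.py and bdevperf stand for the full workspace paths used in the trace:

rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport; -o/-u are the harness's TCP tuning flags
rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevperf in the root namespace: queue depth 1024, 4 KiB verify workload, 10 s
bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # kicks off the timed run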
00:10:13.303 [2024-10-07 09:31:12.890592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3194082 ] 00:10:13.564 [2024-10-07 09:31:12.972674] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.564 [2024-10-07 09:31:13.069168] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.137 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:10:14.137 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@867 -- # return 0 00:10:14.137 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:14.137 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@564 -- # xtrace_disable 00:10:14.137 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.398 NVMe0n1 00:10:14.398 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:10:14.398 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:14.398 Running I/O for 10 seconds... 00:10:24.699 10814.00 IOPS, 42.24 MiB/s 11264.00 IOPS, 44.00 MiB/s 11320.33 IOPS, 44.22 MiB/s 11397.00 IOPS, 44.52 MiB/s 11595.20 IOPS, 45.29 MiB/s 11828.67 IOPS, 46.21 MiB/s 12048.86 IOPS, 47.07 MiB/s 12251.12 IOPS, 47.86 MiB/s 12401.00 IOPS, 48.44 MiB/s 12493.40 IOPS, 48.80 MiB/s 00:10:24.699 Latency(us) 00:10:24.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:24.699 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:24.699 Verification LBA range: start 0x0 length 0x4000 00:10:24.699 NVMe0n1 : 10.06 12520.95 48.91 0.00 0.00 81521.91 25777.49 65099.09 00:10:24.699 =================================================================================================================== 00:10:24.699 Total : 12520.95 48.91 0.00 0.00 81521.91 25777.49 65099.09 00:10:24.699 { 00:10:24.699 "results": [ 00:10:24.699 { 00:10:24.699 "job": "NVMe0n1", 00:10:24.699 "core_mask": "0x1", 00:10:24.699 "workload": "verify", 00:10:24.699 "status": "finished", 00:10:24.699 "verify_range": { 00:10:24.699 "start": 0, 00:10:24.699 "length": 16384 00:10:24.699 }, 00:10:24.699 "queue_depth": 1024, 00:10:24.699 "io_size": 4096, 00:10:24.699 "runtime": 10.059378, 00:10:24.699 "iops": 12520.953084773233, 00:10:24.699 "mibps": 48.90997298739544, 00:10:24.699 "io_failed": 0, 00:10:24.699 "io_timeout": 0, 00:10:24.699 "avg_latency_us": 81521.911019719, 00:10:24.699 "min_latency_us": 25777.493333333332, 00:10:24.699 "max_latency_us": 65099.09333333333 00:10:24.699 } 00:10:24.699 ], 00:10:24.699 "core_count": 1 00:10:24.700 } 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3194082 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' -z 3194082 ']' 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # kill -0 3194082 00:10:24.700 
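The summary line and the JSON block agree, and the headline numbers cross-check: throughput is iops × io_size, i.e. 12520.953 × 4096 B ≈ 51.3 MB/s = 48.91 MiB/s (the "mibps" field), and the run completed roughly iops × runtime ≈ 12520.953 × 10.059378 ≈ 125,950 I/Os at queue depth 1024. One-line check:

echo 'scale=2; 12520.953084773233 * 4096 / 1048576' | bc   # -> 48.90 (bc truncates; the report rounds to 48.91)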
09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # uname 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3194082 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3194082' 00:10:24.700 killing process with pid 3194082 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # kill 3194082 00:10:24.700 Received shutdown signal, test time was about 10.000000 seconds 00:10:24.700 00:10:24.700 Latency(us) 00:10:24.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:24.700 =================================================================================================================== 00:10:24.700 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@977 -- # wait 3194082 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:24.700 rmmod nvme_tcp 00:10:24.700 rmmod nvme_fabrics 00:10:24.700 rmmod nvme_keyring 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 3193828 ']' 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 3193828 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' -z 3193828 ']' 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # kill -0 3193828 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # uname 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:10:24.700 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3193828 00:10:24.958 
09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:10:24.958 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:10:24.958 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3193828' 00:10:24.958 killing process with pid 3193828 00:10:24.958 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # kill 3193828 00:10:24.958 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@977 -- # wait 3193828 00:10:24.958 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:24.958 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:24.958 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:24.958 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:24.958 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:10:24.958 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:24.958 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:10:24.959 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:24.959 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:24.959 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.959 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.959 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:27.505 00:10:27.505 real 0m22.759s 00:10:27.505 user 0m25.936s 00:10:27.505 sys 0m7.165s 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # xtrace_disable 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:27.505 ************************************ 00:10:27.505 END TEST nvmf_queue_depth 00:10:27.505 ************************************ 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1110 -- # xtrace_disable 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:27.505 ************************************ 00:10:27.505 START TEST nvmf_target_multipath 00:10:27.505 ************************************ 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:27.505 * Looking for test storage... 
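Before the multipath test gets going, the teardown that just closed the queue-depth test is worth reading as a pattern: unload the kernel initiator stack, kill the target, strip only the SPDK-tagged iptables rules, and flush the test interface. Condensed from the trace; the body of _remove_spdk_ns is not shown, so the namespace deletion below is an inference from its name:

modprobe -v -r nvme-tcp                                # also pulls in nvme-fabrics and nvme-keyring
kill "$nvmfpid"                                        # nvmf_tgt, pid 3193828 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only rules tagged SPDK_NVMF
ip netns delete cvl_0_0_ns_spdk                        # assumed: what _remove_spdk_ns does
ip -4 addr flush cvl_0_1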
00:10:27.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1626 -- # lcov --version 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:10:27.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.505 --rc genhtml_branch_coverage=1 00:10:27.505 --rc genhtml_function_coverage=1 00:10:27.505 --rc genhtml_legend=1 00:10:27.505 --rc geninfo_all_blocks=1 00:10:27.505 --rc geninfo_unexecuted_blocks=1 00:10:27.505 00:10:27.505 ' 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:10:27.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.505 --rc genhtml_branch_coverage=1 00:10:27.505 --rc genhtml_function_coverage=1 00:10:27.505 --rc genhtml_legend=1 00:10:27.505 --rc geninfo_all_blocks=1 00:10:27.505 --rc geninfo_unexecuted_blocks=1 00:10:27.505 00:10:27.505 ' 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:10:27.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.505 --rc genhtml_branch_coverage=1 00:10:27.505 --rc genhtml_function_coverage=1 00:10:27.505 --rc genhtml_legend=1 00:10:27.505 --rc geninfo_all_blocks=1 00:10:27.505 --rc geninfo_unexecuted_blocks=1 00:10:27.505 00:10:27.505 ' 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:10:27.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.505 --rc genhtml_branch_coverage=1 00:10:27.505 --rc genhtml_function_coverage=1 00:10:27.505 --rc genhtml_legend=1 00:10:27.505 --rc geninfo_all_blocks=1 00:10:27.505 --rc geninfo_unexecuted_blocks=1 00:10:27.505 00:10:27.505 ' 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.505 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:27.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.506 09:31:26 
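The PATH echoed above is the same one dumped during the queue-depth test, grown again: /etc/opt/spdk-pkgdep/paths/export.sh prepends the go/protoc/golangci directories unconditionally each time it is sourced, so the same triple repeats ten-plus times by this point. Harmless but noisy; an idempotent prepend would keep it bounded. A sketch, where pathmunge is a hypothetical helper and not part of the harness:

# Prepend a directory to PATH only if it is not already present.
pathmunge() {
    case ":$PATH:" in
        *":$1:"*) ;;                  # already there, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}
pathmunge /opt/go/1.21.1/bin
pathmunge /opt/protoc/21.7/bin
pathmunge /opt/golangci/1.54.2/bin
export PATH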
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:27.506 09:31:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:35.654 
09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:35.654 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:35.654 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
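For each E810 port that survives classification, the script resolves the bound kernel interface through sysfs rather than parsing ip link output; the glob is non-empty only when a driver (ice here) has created a netdev for the function, which is what the (( 1 == 0 )) guard checks. Equivalent standalone lookup:

pci=0000:31:00.0                                   # first port found above
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the interface name
echo "Found net devices under $pci: ${pci_net_devs[*]}"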
00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:35.654 Found net devices under 0000:31:00.0: cvl_0_0 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:35.654 Found net devices under 0000:31:00.1: cvl_0_1 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.654 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:35.655 09:31:34 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:35.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:35.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:10:35.655 00:10:35.655 --- 10.0.0.2 ping statistics --- 00:10:35.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.655 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:35.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:35.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:10:35.655 00:10:35.655 --- 10.0.0.1 ping statistics --- 00:10:35.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.655 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:35.655 only one NIC for nvmf test 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.655 rmmod nvme_tcp 00:10:35.655 rmmod nvme_fabrics 00:10:35.655 rmmod nvme_keyring 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:10:35.655 09:31:34 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.655 09:31:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.570 09:31:36 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:37.570 00:10:37.570 real 0m10.244s 00:10:37.570 user 0m2.321s 00:10:37.570 sys 0m5.861s 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # xtrace_disable 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:37.570 ************************************ 00:10:37.570 END TEST nvmf_target_multipath 00:10:37.570 ************************************ 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1110 -- # xtrace_disable 00:10:37.570 09:31:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.570 ************************************ 00:10:37.570 START TEST nvmf_zcopy 00:10:37.570 ************************************ 00:10:37.570 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:37.570 * Looking for test storage... 00:10:37.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.570 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:10:37.570 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1626 -- # lcov --version 00:10:37.570 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:10:37.570 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:10:37.570 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.570 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.570 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.570 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.570 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.570 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.570 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.570 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.570 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- scripts/common.sh@345 -- # : 1 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:10:37.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.832 --rc genhtml_branch_coverage=1 00:10:37.832 --rc genhtml_function_coverage=1 00:10:37.832 --rc genhtml_legend=1 00:10:37.832 --rc geninfo_all_blocks=1 00:10:37.832 --rc geninfo_unexecuted_blocks=1 00:10:37.832 00:10:37.832 ' 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:10:37.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.832 --rc genhtml_branch_coverage=1 00:10:37.832 --rc genhtml_function_coverage=1 00:10:37.832 --rc genhtml_legend=1 00:10:37.832 --rc geninfo_all_blocks=1 00:10:37.832 --rc geninfo_unexecuted_blocks=1 00:10:37.832 00:10:37.832 ' 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:10:37.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.832 --rc genhtml_branch_coverage=1 00:10:37.832 --rc genhtml_function_coverage=1 00:10:37.832 --rc genhtml_legend=1 00:10:37.832 --rc geninfo_all_blocks=1 00:10:37.832 --rc geninfo_unexecuted_blocks=1 00:10:37.832 00:10:37.832 ' 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:10:37.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.832 --rc genhtml_branch_coverage=1 00:10:37.832 --rc genhtml_function_coverage=1 00:10:37.832 --rc genhtml_legend=1 00:10:37.832 --rc geninfo_all_blocks=1 00:10:37.832 --rc geninfo_unexecuted_blocks=1 00:10:37.832 00:10:37.832 ' 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.832 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.833 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.977 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:45.978 
09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:45.978 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:45.978 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:45.978 Found net devices under 0000:31:00.0: cvl_0_0 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
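At this point in the stream, nvmf/common.sh has matched both ports of the Intel E810 NIC (0x8086:0x159b, driver "ice") against its per-family PCI ID tables and is resolving each matched function to its kernel netdev; the stream continues below with exactly that step. A standalone sketch of the discovery pattern, trimmed to the E810 IDs this log actually matches — not the real gather_supported_nvmf_pci_devs, which also carries the x722 and Mellanox ID tables and checks link state:

#!/usr/bin/env bash
# Minimal sketch: find NVMf-capable NICs by PCI vendor:device ID, then
# resolve each matched function to its netdev via the same
# /sys/bus/pci/devices/<bdf>/net/ glob the traced script uses.
shopt -s nullglob

pci_devs=() net_devs=()

for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    case "$vendor:$device" in
        0x8086:0x159b|0x8086:0x1592)   # Intel E810 family ('ice')
            pci_devs+=("${dev##*/}")
            ;;
    esac
done

(( ${#pci_devs[@]} )) || { echo "no NVMf-capable NICs found" >&2; exit 1; }

for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    (( ${#pci_net_devs[@]} )) || continue      # device not bound to a driver
    pci_net_devs=("${pci_net_devs[@]##*/}")    # strip the sysfs path prefix
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

On this rig the two matched functions, 0000:31:00.0 and 0000:31:00.1, resolve to cvl_0_0 and cvl_0_1, which is what the "Found net devices under ..." lines report.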
00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:45.978 Found net devices under 0000:31:00.1: cvl_0_1 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
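The nvmf_tcp_init sequence running here builds the test topology out of a single dual-port NIC: one port is moved into a private network namespace to act as the target while its sibling stays in the root namespace as the initiator, each with its own address on 10.0.0.0/24. (The same init ran during the multipath test earlier, before that test bowed out with "only one NIC for nvmf test".) A minimal sketch of the wiring, using only the interface, namespace, address, and port values visible in the log; the stream resumes below with the same steps (lo up, the iptables rule, the two pings):

#!/usr/bin/env bash
# Sketch of the two-port, one-namespace topology; not the real
# nvmf/common.sh. Interface/namespace names, addresses, and the NVMe/TCP
# port 4420 are taken from the log; everything else is illustrative.
set -e

TGT_IF=cvl_0_0        # becomes the target-side port, inside the namespace
INI_IF=cvl_0_1        # stays in the root namespace as the initiator port
NS=cvl_0_0_ns_spdk    # namespace the SPDK target will later run in

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port, tagged so teardown can sweep every SPDK rule
# with: iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF

# Both directions must answer before the test proceeds.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

The tagged iptables rule is what the iptr helper sweeps away in the nvmftestfini teardown seen after each test above.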
00:10:45.978 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:45.978 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:45.978 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:45.978 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:45.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:45.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:10:45.978 00:10:45.978 --- 10.0.0.2 ping statistics --- 00:10:45.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.978 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:10:45.978 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:45.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:45.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:10:45.978 00:10:45.978 --- 10.0.0.1 ping statistics --- 00:10:45.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.978 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:10:45.978 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.978 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:10:45.978 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:45.978 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.979 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:45.979 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:45.979 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.979 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:45.979 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:45.979 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:45.979 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:45.979 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@727 -- # xtrace_disable 00:10:45.979 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.979 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=3205160 00:10:45.979 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 3205160 00:10:45.979 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:45.979 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # '[' -z 3205160 ']' 00:10:45.979 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.979 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@839 -- # local max_retries=100 00:10:45.979 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.979 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@843 -- # xtrace_disable 00:10:45.979 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.979 [2024-10-07 09:31:45.144736] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:10:45.979 [2024-10-07 09:31:45.144805] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.979 [2024-10-07 09:31:45.234590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.979 [2024-10-07 09:31:45.326904] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.979 [2024-10-07 09:31:45.326964] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.979 [2024-10-07 09:31:45.326972] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.979 [2024-10-07 09:31:45.326980] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.979 [2024-10-07 09:31:45.326986] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.979 [2024-10-07 09:31:45.327773] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.551 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:10:46.551 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@867 -- # return 0 00:10:46.551 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:46.551 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@733 -- # xtrace_disable 00:10:46.551 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@564 -- # xtrace_disable 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:46.551 [2024-10-07 09:31:46.012279] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@564 -- # xtrace_disable 00:10:46.551 09:31:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@564 -- # xtrace_disable 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:46.551 [2024-10-07 09:31:46.036539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@564 -- # xtrace_disable 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@564 -- # xtrace_disable 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:46.551 malloc0 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@564 -- # xtrace_disable 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:46.551 { 00:10:46.551 "params": { 00:10:46.551 "name": "Nvme$subsystem", 00:10:46.551 "trtype": "$TEST_TRANSPORT", 00:10:46.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.551 "adrfam": "ipv4", 00:10:46.551 "trsvcid": "$NVMF_PORT", 00:10:46.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:46.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.551 "hdgst": ${hdgst:-false}, 00:10:46.551 "ddgst": ${ddgst:-false} 00:10:46.551 }, 00:10:46.551 "method": "bdev_nvme_attach_controller" 00:10:46.551 } 00:10:46.551 EOF 
00:10:46.551 )") 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:10:46.551 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:46.551 "params": { 00:10:46.551 "name": "Nvme1", 00:10:46.551 "trtype": "tcp", 00:10:46.551 "traddr": "10.0.0.2", 00:10:46.551 "adrfam": "ipv4", 00:10:46.551 "trsvcid": "4420", 00:10:46.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.551 "hdgst": false, 00:10:46.551 "ddgst": false 00:10:46.551 }, 00:10:46.551 "method": "bdev_nvme_attach_controller" 00:10:46.551 }' 00:10:46.551 [2024-10-07 09:31:46.148342] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:10:46.551 [2024-10-07 09:31:46.148416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205272 ] 00:10:46.812 [2024-10-07 09:31:46.231754] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.812 [2024-10-07 09:31:46.329579] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.073 Running I/O for 10 seconds... 00:10:57.145 7130.00 IOPS, 55.70 MiB/s 8435.50 IOPS, 65.90 MiB/s 8881.67 IOPS, 69.39 MiB/s 9096.75 IOPS, 71.07 MiB/s 9236.80 IOPS, 72.16 MiB/s 9327.00 IOPS, 72.87 MiB/s 9388.71 IOPS, 73.35 MiB/s 9438.25 IOPS, 73.74 MiB/s 9472.11 IOPS, 74.00 MiB/s 9502.10 IOPS, 74.24 MiB/s 00:10:57.145 Latency(us) 00:10:57.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.145 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:57.145 Verification LBA range: start 0x0 length 0x1000 00:10:57.146 Nvme1n1 : 10.01 9505.65 74.26 0.00 0.00 13419.90 2280.11 28180.48 00:10:57.146 =================================================================================================================== 00:10:57.146 Total : 9505.65 74.26 0.00 0.00 13419.90 2280.11 28180.48 00:10:57.146 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3207383 00:10:57.146 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:57.146 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:57.146 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:57.146 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:57.146 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:10:57.146 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:10:57.146 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:57.146 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:57.146 { 00:10:57.146 "params": { 00:10:57.146 "name": "Nvme$subsystem", 00:10:57.146 "trtype": "$TEST_TRANSPORT", 00:10:57.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:57.146 "adrfam": 
"ipv4", 00:10:57.146 "trsvcid": "$NVMF_PORT", 00:10:57.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:57.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:57.146 "hdgst": ${hdgst:-false}, 00:10:57.146 "ddgst": ${ddgst:-false} 00:10:57.146 }, 00:10:57.146 "method": "bdev_nvme_attach_controller" 00:10:57.146 } 00:10:57.146 EOF 00:10:57.146 )") 00:10:57.146 [2024-10-07 09:31:56.694722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.146 [2024-10-07 09:31:56.694752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.146 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:10:57.146 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:10:57.146 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:10:57.146 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:57.146 "params": { 00:10:57.146 "name": "Nvme1", 00:10:57.146 "trtype": "tcp", 00:10:57.146 "traddr": "10.0.0.2", 00:10:57.146 "adrfam": "ipv4", 00:10:57.146 "trsvcid": "4420", 00:10:57.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:57.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:57.146 "hdgst": false, 00:10:57.146 "ddgst": false 00:10:57.146 }, 00:10:57.146 "method": "bdev_nvme_attach_controller" 00:10:57.146 }' 00:10:57.146 [2024-10-07 09:31:56.706721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.146 [2024-10-07 09:31:56.706732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.146 [2024-10-07 09:31:56.718751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.146 [2024-10-07 09:31:56.718759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.146 [2024-10-07 09:31:56.730781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.146 [2024-10-07 09:31:56.730788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.146 [2024-10-07 09:31:56.739335] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:10:57.146 [2024-10-07 09:31:56.739335] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization...
00:10:57.146 [2024-10-07 09:31:56.739394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207383 ]
00:10:57.146 [2024-10-07 09:31:56.742809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:57.146 [2024-10-07 09:31:56.742817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[above error pair repeated at ~12 ms intervals from 09:31:56.754 through 09:31:56.814; duplicates omitted]
00:10:57.406 [2024-10-07 09:31:56.822076] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[above error pair repeated from 09:31:56.827 through 09:31:56.875; duplicates omitted]
00:10:57.406 [2024-10-07 09:31:56.876519] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
[above error pair repeated from 09:31:56.887 through 09:31:57.055; duplicates omitted]
00:10:57.407 Running I/O for 5 seconds...
00:10:57.667 [2024-10-07 09:31:57.070760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:57.667 [2024-10-07 09:31:57.070777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[above error pair repeated from 09:31:57.084 through 09:31:57.253; duplicates omitted]
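The subsystem.c:2128 / nvmf_rpc.c:1517 pair that floods the rest of this run is consistent with the zcopy test re-adding a namespace under a fixed NSID while bdevperf keeps I/O in flight: every attempt after the first fails because NSID 1 is still allocated. A minimal sketch that would reproduce the same two errors, assuming a target whose subsystem is nqn.2016-06.io.spdk:cnode1 and a bdev named Malloc0 (both placeholders, not read from this log):

  # Hypothetical reproduction: the second add_ns fails with
  # "Requested NSID 1 already in use" followed by "Unable to add namespace".
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1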
[above error pair continues at ~13 ms intervals from 09:31:57.267 through 09:31:58.013; duplicates omitted]
[above error pair continues from 09:31:58.026 through 09:31:58.053; duplicates omitted]
00:10:58.451 19210.00 IOPS, 150.08 MiB/s
[above error pair continues from 09:31:58.066 through 09:31:58.198; duplicates omitted]
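A quick sanity check on the interval stats above: 150.08 MiB/s divided by 19210.00 IOPS works out to roughly 8192 bytes, i.e. the run appears to be issuing 8 KiB I/Os. For example:

  # Back out the average I/O size from the reported interval counters.
  awk 'BEGIN { printf "%.0f bytes per I/O\n", 150.08 * 1024 * 1024 / 19210.00 }'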
[above error pair continues from 09:31:58.212 through 09:31:58.972; duplicates omitted]
[above error pair continues from 09:31:58.985 through 09:31:59.049; duplicates omitted]
00:10:59.497 19242.00 IOPS, 150.33 MiB/s
[above error pair continues from 09:31:59.063 through 09:31:59.155; duplicates omitted]
[above error pair continues from 09:31:59.168 through 09:31:59.912; duplicates omitted]
[above error pair continues from 09:31:59.926 through 09:32:00.061; duplicates omitted]
00:11:00.541 19254.33 IOPS, 150.42 MiB/s
[above error pair continues from 09:32:00.074 through 09:32:00.113; duplicates omitted]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.541 [2024-10-07 09:32:00.113207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.541 [2024-10-07 09:32:00.126276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.541 [2024-10-07 09:32:00.126293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.541 [2024-10-07 09:32:00.139479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.541 [2024-10-07 09:32:00.139494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.541 [2024-10-07 09:32:00.152314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.541 [2024-10-07 09:32:00.152329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.541 [2024-10-07 09:32:00.165834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.541 [2024-10-07 09:32:00.165849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.541 [2024-10-07 09:32:00.178904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.541 [2024-10-07 09:32:00.178918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.541 [2024-10-07 09:32:00.192510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.541 [2024-10-07 09:32:00.192525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.801 [2024-10-07 09:32:00.205357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.801 [2024-10-07 09:32:00.205372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.801 [2024-10-07 09:32:00.218895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.801 [2024-10-07 09:32:00.218911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.801 [2024-10-07 09:32:00.232263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.801 [2024-10-07 09:32:00.232278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.801 [2024-10-07 09:32:00.245797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.801 [2024-10-07 09:32:00.245813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.801 [2024-10-07 09:32:00.259066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.801 [2024-10-07 09:32:00.259081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.801 [2024-10-07 09:32:00.272268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.801 [2024-10-07 09:32:00.272283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.801 [2024-10-07 09:32:00.285024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.801 [2024-10-07 09:32:00.285039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.801 [2024-10-07 09:32:00.297720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.801 [2024-10-07 09:32:00.297735] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.801 [2024-10-07 09:32:00.310200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.801 [2024-10-07 09:32:00.310216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.801 [2024-10-07 09:32:00.322758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.801 [2024-10-07 09:32:00.322774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.801 [2024-10-07 09:32:00.335890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.801 [2024-10-07 09:32:00.335905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.801 [2024-10-07 09:32:00.348094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.802 [2024-10-07 09:32:00.348109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.802 [2024-10-07 09:32:00.361679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.802 [2024-10-07 09:32:00.361695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.802 [2024-10-07 09:32:00.374921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.802 [2024-10-07 09:32:00.374936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.802 [2024-10-07 09:32:00.388201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.802 [2024-10-07 09:32:00.388216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.802 [2024-10-07 09:32:00.401453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.802 [2024-10-07 09:32:00.401467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.802 [2024-10-07 09:32:00.414814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.802 [2024-10-07 09:32:00.414829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.802 [2024-10-07 09:32:00.427852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.802 [2024-10-07 09:32:00.427867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.802 [2024-10-07 09:32:00.441421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.802 [2024-10-07 09:32:00.441435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.802 [2024-10-07 09:32:00.454034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.802 [2024-10-07 09:32:00.454049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.466913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.466928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.479345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.479360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.491953] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.491968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.505316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.505332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.518788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.518803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.532138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.532152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.545484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.545499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.558724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.558739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.571116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.571130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.584166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.584184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.596756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.596770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.610546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.610561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.623089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.623103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.636830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.636845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.650490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.650505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.663751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.663766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.676932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.676947] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.690665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.690680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.703274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.703289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.063 [2024-10-07 09:32:00.716586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.063 [2024-10-07 09:32:00.716600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.728905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.728920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.741729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.741744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.755305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.755320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.768517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.768532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.781077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.781092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.793764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.793779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.806504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.806518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.819373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.819387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.832439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.832457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.846096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.846110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.859850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.859865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.872970] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.872984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.885482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.885497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.898915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.898929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.912170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.912184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.925038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.925052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.938374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.938389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.950955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.950969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.963575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.963590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.324 [2024-10-07 09:32:00.976751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.324 [2024-10-07 09:32:00.976766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:00.989563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:00.989578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:01.001882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:01.001897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:01.015257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:01.015272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:01.029054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:01.029069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:01.041695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:01.041710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:01.054138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:01.054153] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 19258.25 IOPS, 150.46 MiB/s [2024-10-07 09:32:01.066956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:01.066970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:01.080450] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:01.080468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:01.094048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:01.094063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:01.107275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:01.107289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:01.120792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:01.120807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:01.133727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:01.133742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:01.147319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:01.147334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:01.160712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:01.160727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:01.173565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:01.173579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:01.185959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:01.185973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:01.198895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:01.198910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:01.212467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:01.212481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:01.225822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.585 [2024-10-07 09:32:01.225837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.585 [2024-10-07 09:32:01.238723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.586 [2024-10-07 09:32:01.238738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.251574] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.251589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.264876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.264890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.278117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.278131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.290841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.290855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.304644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.304658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.317343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.317357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.330578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.330593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.343403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.343418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.356118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.356133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.368995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.369010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.382406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.382420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.395569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.395584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.408847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.408861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.422347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.422362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.435453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.435467] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.448473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.448487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.461943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.461958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.475288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.475303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.488425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.488439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.846 [2024-10-07 09:32:01.502151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.846 [2024-10-07 09:32:01.502166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.515460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.515476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.528598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.528613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.541846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.541861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.554813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.554827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.567898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.567913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.581009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.581024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.594364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.594378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.608014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.608028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.620522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.620538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.633539] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.633553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.647004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.647019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.659882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.659897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.673094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.673110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.686438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.686453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.699526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.699540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.712715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.712731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.726205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.726220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.739886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.739902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.752761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.752776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.108 [2024-10-07 09:32:01.765200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.108 [2024-10-07 09:32:01.765215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:01.778408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:01.778423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:01.791568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:01.791583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:01.804570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:01.804585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:01.817625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:01.817641] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:01.830339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:01.830354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:01.843671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:01.843686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:01.857203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:01.857219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:01.869570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:01.869585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:01.882788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:01.882803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:01.894990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:01.895005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:01.908398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:01.908413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:01.921393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:01.921408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:01.933990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:01.934005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:01.947317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:01.947333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:01.960704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:01.960720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:01.973764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:01.973779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:01.986031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:01.986046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:01.999346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:01.999361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:02.012091] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:02.012106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.369 [2024-10-07 09:32:02.025702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.369 [2024-10-07 09:32:02.025717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.630 [2024-10-07 09:32:02.039021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.630 [2024-10-07 09:32:02.039036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.630 [2024-10-07 09:32:02.052334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.630 [2024-10-07 09:32:02.052350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.630 [2024-10-07 09:32:02.064514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.630 [2024-10-07 09:32:02.064529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.630 19276.00 IOPS, 150.59 MiB/s [2024-10-07 09:32:02.074934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.630 [2024-10-07 09:32:02.074948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.630 00:11:02.630 Latency(us) 00:11:02.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.630 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:02.630 Nvme1n1 : 5.01 19278.90 150.62 0.00 0.00 6633.15 3072.00 14199.47 00:11:02.630 =================================================================================================================== 00:11:02.630 Total : 19278.90 150.62 0.00 0.00 6633.15 3072.00 14199.47 00:11:02.630 [2024-10-07 09:32:02.086462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.630 [2024-10-07 09:32:02.086475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.630 [2024-10-07 09:32:02.098495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.630 [2024-10-07 09:32:02.098508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.630 [2024-10-07 09:32:02.110525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.630 [2024-10-07 09:32:02.110540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.630 [2024-10-07 09:32:02.122554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.630 [2024-10-07 09:32:02.122566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.630 [2024-10-07 09:32:02.134580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.630 [2024-10-07 09:32:02.134591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.630 [2024-10-07 09:32:02.146609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.630 [2024-10-07 09:32:02.146624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.630 [2024-10-07 09:32:02.158649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.630 
[2024-10-07 09:32:02.158662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.630 [2024-10-07 09:32:02.170672] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.630 [2024-10-07 09:32:02.170682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.630 [2024-10-07 09:32:02.182703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.630 [2024-10-07 09:32:02.182714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.630 [2024-10-07 09:32:02.194731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.630 [2024-10-07 09:32:02.194739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3207383) - No such process 00:11:02.630 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3207383 00:11:02.630 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.630 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@564 -- # xtrace_disable 00:11:02.630 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:02.630 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:11:02.630 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:02.630 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@564 -- # xtrace_disable 00:11:02.630 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:02.630 delay0 00:11:02.631 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:11:02.631 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:02.631 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@564 -- # xtrace_disable 00:11:02.631 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:02.631 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:11:02.631 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:02.891 [2024-10-07 09:32:02.395765] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:11.043 Initializing NVMe Controllers 00:11:11.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:11.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:11.044 Initialization complete. Launching workers. 
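Condensed, the zcopy steps just traced (zcopy.sh@52 through @56) swap NSID 1 over to a deliberately slow delay bdev and then race abort requests against it. A minimal stand-alone sketch of that sequence, assuming a running SPDK nvmf target that already exposes nqn.2016-06.io.spdk:cnode1 with a malloc0 bdev and that the commands run from the SPDK tree; the RPC variable is an assumed path to scripts/rpc.py, not something the trace shows:

#!/usr/bin/env bash
# Sketch only: re-creates the traced zcopy steps, not the zcopy.sh source.
set -euo pipefail
RPC=./scripts/rpc.py    # assumed location of the SPDK RPC client

# zcopy.sh@52: free NSID 1 on the subsystem.
"$RPC" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

# zcopy.sh@53: wrap malloc0 in a delay bdev; all four latency knobs are
# 1,000,000 us, so I/O to delay0 stays in flight long enough to be aborted.
"$RPC" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

# zcopy.sh@54: expose the slow bdev as NSID 1 again.
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# zcopy.sh@56: drive randrw I/O and abort it from the SPDK abort example app.
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort statistics that follow (I/O completed/failed, aborts submitted/succeeded) are the signal the test is after.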
00:11:11.044 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 241, failed: 34557
00:11:11.044 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 34689, failed to submit 109
00:11:11.044 success 34592, unsuccessful 97, failed 0
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:11.044 rmmod nvme_tcp
00:11:11.044 rmmod nvme_fabrics
00:11:11.044 rmmod nvme_keyring
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 3205160 ']'
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 3205160
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' -z 3205160 ']'
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # kill -0 3205160
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # uname
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3205160
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # process_name=reactor_1
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']'
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3205160'
00:11:11.044 killing process with pid 3205160
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # kill 3205160
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@977 -- # wait 3205160
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
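One detail worth pulling out of the nvmfcleanup trace above: module unload runs under set +e inside a {1..20} loop, because nvme-tcp can stay referenced for a moment after the target process exits. Only a single iteration is visible in the log, so the break and the back-off below are assumptions, not the common.sh source:

# Hedged sketch of the traced unload idiom (nvmf/common.sh@124-@129).
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # trace shows this dropping nvme_tcp, nvme_fabrics, nvme_keyring
    sleep 1                            # assumed retry delay; not visible in the trace
done
modprobe -v -r nvme-fabrics            # in case only the transport module unloaded
set -e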
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:11.044 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:12.427 09:32:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:12.427
00:11:12.427 real    0m34.823s
00:11:12.427 user    0m45.319s
00:11:12.427 sys     0m12.220s
00:11:12.427 09:32:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # xtrace_disable
00:11:12.427 09:32:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:12.427 ************************************
00:11:12.427 END TEST nvmf_zcopy
00:11:12.427 ************************************
00:11:12.427 09:32:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:11:12.427 09:32:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']'
00:11:12.427 09:32:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1110 -- # xtrace_disable
00:11:12.427 09:32:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:12.427 ************************************
00:11:12.427 START TEST nvmf_nmic
00:11:12.427 ************************************
00:11:12.427 09:32:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:11:12.427 * Looking for test storage...
00:11:12.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:12.427 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1625 -- # [[ y == y ]]
00:11:12.427 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1626 -- # lcov --version
00:11:12.427 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1626 -- # awk '{print $NF}'
00:11:12.689 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1626 -- # lt 1.15 2
00:11:12.689 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:12.689 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:12.689 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:12.689 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:11:12.689 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:11:12.689 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:11:12.689 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:11:12.689 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:11:12.689 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:11:12.689 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:11:12.689 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:12.689 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:11:12.689 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:11:12.689 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:12.689 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS=
00:11:12.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:12.690 --rc genhtml_branch_coverage=1
00:11:12.690 --rc genhtml_function_coverage=1
00:11:12.690 --rc genhtml_legend=1
00:11:12.690 --rc geninfo_all_blocks=1
00:11:12.690 --rc geninfo_unexecuted_blocks=1
00:11:12.690
00:11:12.690 '
00:11:12.690 [... the same multi-line option block repeats verbatim for the LCOV_OPTS assignment (@1639) and again for export LCOV / LCOV='lcov ...' (@1640); duplicates elided ...]
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
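The cmp_versions walk above (scripts/common.sh@333-@368) is a field-by-field numeric compare of dot/dash-separated versions: split both strings on IFS=.-:, then let the first differing field decide. A condensed re-derivation of that logic, not the scripts/common.sh source; the helper name lt mirrors the trace, and non-numeric field validation (the decimal steps) is omitted:

# Return 0 (true) when version $1 sorts strictly before version $2.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"      # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"      # "2"    -> (2)
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                            # equal versions are not less-than
}

lt 1.15 2 && echo "lcov 1.15 predates 2"    # same verdict the trace reaches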
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three /opt entries, already accumulated a dozen times over from earlier sourcing, elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=[... as above, with the /opt trio prepended once more ...]
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=[... as above ...]
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo [... the accumulated PATH, elided ...]
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
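Right below, nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and test(1) rejects it with "integer expression expected": -eq demands integer operands, and an empty expansion is not treated as 0. A two-line repro, with a hypothetical variable name:

flag=""              # e.g. an unset option expands to the empty string
[ "$flag" -eq 1 ]    # -> [: : integer expression expected (exit status 2)
# A guarded form such as [ "${flag:-0}" -eq 1 ] would avoid the message.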
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:12.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.690 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:12.691 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:12.691 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:12.691 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:20.835 09:32:19 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:20.835 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:20.835 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:20.835 Found net devices under 0000:31:00.0: cvl_0_0 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:20.835 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:20.836 Found net devices under 0000:31:00.1: cvl_0_1 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:20.836 09:32:19 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:20.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:20.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:11:20.836 00:11:20.836 --- 10.0.0.2 ping statistics --- 00:11:20.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.836 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:20.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:20.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:11:20.836 00:11:20.836 --- 10.0.0.1 ping statistics --- 00:11:20.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.836 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=3214378 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 3214378 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # '[' -z 3214378 ']' 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local max_retries=100 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@843 -- # xtrace_disable 00:11:20.836 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.836 [2024-10-07 09:32:20.051276] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
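For reference, the nvmf_tcp_init bring-up traced above reduces to a short ip/iptables sequence. This is a condensed sketch assembled from the trace, not the script source; the interface names (cvl_0_0, cvl_0_1), the namespace name, and the 10.0.0.0/24 addresses are simply the values observed in this run:

    # Isolate the target-side port in its own network namespace so one host
    # can act as both NVMe/TCP target (inside the netns) and initiator.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address and raise both ends.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Let NVMe/TCP traffic (port 4420) through the host firewall, then
    # verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched through "ip netns exec cvl_0_0_ns_spdk" (see NVMF_TARGET_NS_CMD above), which is why the target listens on 10.0.0.2 while the nvme-cli initiator connects from the default namespace.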
00:11:20.836 [2024-10-07 09:32:20.051347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.836 [2024-10-07 09:32:20.143674] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.836 [2024-10-07 09:32:20.243152] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.836 [2024-10-07 09:32:20.243220] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:20.836 [2024-10-07 09:32:20.243229] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.836 [2024-10-07 09:32:20.243236] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.836 [2024-10-07 09:32:20.243242] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:20.836 [2024-10-07 09:32:20.245364] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.836 [2024-10-07 09:32:20.245526] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.836 [2024-10-07 09:32:20.245684] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.836 [2024-10-07 09:32:20.245685] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@867 -- # return 0 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@733 -- # xtrace_disable 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@564 -- # xtrace_disable 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.409 [2024-10-07 09:32:20.934587] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@564 -- # xtrace_disable 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.409 Malloc0 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@564 -- # xtrace_disable 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@564 -- # xtrace_disable 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@564 -- # xtrace_disable 00:11:21.409 09:32:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.409 [2024-10-07 09:32:21.000357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.409 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:11:21.409 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:21.409 test case1: single bdev can't be used in multiple subsystems 00:11:21.409 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:21.409 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@564 -- # xtrace_disable 00:11:21.409 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.409 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:11:21.409 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:21.409 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@564 -- # xtrace_disable 00:11:21.409 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.409 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:11:21.409 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:21.409 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:21.410 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@564 -- # xtrace_disable 00:11:21.410 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.410 [2024-10-07 09:32:21.036186] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:21.410 [2024-10-07 09:32:21.036215] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:21.410 [2024-10-07 09:32:21.036224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.410 request: 00:11:21.410 { 00:11:21.410 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:21.410 "namespace": { 00:11:21.410 "bdev_name": "Malloc0", 00:11:21.410 "no_auto_visible": false 
00:11:21.410 }, 00:11:21.410 "method": "nvmf_subsystem_add_ns", 00:11:21.410 "req_id": 1 00:11:21.410 } 00:11:21.410 Got JSON-RPC error response 00:11:21.410 response: 00:11:21.410 { 00:11:21.410 "code": -32602, 00:11:21.410 "message": "Invalid parameters" 00:11:21.410 } 00:11:21.410 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:11:21.410 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:21.410 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:21.410 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:21.410 Adding namespace failed - expected result. 00:11:21.410 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:21.410 test case2: host connect to nvmf target in multiple paths 00:11:21.410 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:21.410 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@564 -- # xtrace_disable 00:11:21.410 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:21.410 [2024-10-07 09:32:21.048417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:21.410 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:11:21.410 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:23.327 09:32:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:24.714 09:32:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:24.714 09:32:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local i=0 00:11:24.714 09:32:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local nvme_device_counter=1 nvme_devices=0 00:11:24.714 09:32:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # [[ -n '' ]] 00:11:24.714 09:32:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # sleep 2 00:11:26.624 09:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # (( i++ <= 15 )) 00:11:26.624 09:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # lsblk -l -o NAME,SERIAL 00:11:26.624 09:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # grep -c SPDKISFASTANDAWESOME 00:11:26.624 09:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # nvme_devices=1 00:11:26.624 09:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # (( nvme_devices == nvme_device_counter )) 00:11:26.624 09:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # return 0 00:11:26.624 09:32:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:26.624 [global] 00:11:26.624 thread=1 00:11:26.624 invalidate=1 00:11:26.624 rw=write 00:11:26.624 time_based=1 00:11:26.624 runtime=1 00:11:26.624 ioengine=libaio 00:11:26.624 direct=1 00:11:26.624 bs=4096 00:11:26.624 iodepth=1 00:11:26.624 norandommap=0 00:11:26.624 numjobs=1 00:11:26.624 00:11:26.624 verify_dump=1 00:11:26.624 verify_backlog=512 00:11:26.624 verify_state_save=0 00:11:26.624 do_verify=1 00:11:26.624 verify=crc32c-intel 00:11:26.624 [job0] 00:11:26.624 filename=/dev/nvme0n1 00:11:26.624 Could not set queue depth (nvme0n1) 00:11:26.885 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:26.885 fio-3.35 00:11:26.885 Starting 1 thread 00:11:28.271 00:11:28.271 job0: (groupid=0, jobs=1): err= 0: pid=3215920: Mon Oct 7 09:32:27 2024 00:11:28.271 read: IOPS=15, BW=63.9KiB/s (65.4kB/s)(64.0KiB/1002msec) 00:11:28.271 slat (nsec): min=9619, max=28365, avg=25463.50, stdev=6180.46 00:11:28.271 clat (usec): min=41027, max=42012, avg=41761.16, stdev=353.62 00:11:28.271 lat (usec): min=41055, max=42040, avg=41786.62, stdev=353.15 00:11:28.271 clat percentiles (usec): 00:11:28.271 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:11:28.271 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:28.271 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:28.271 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:28.271 | 99.99th=[42206] 00:11:28.271 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:11:28.271 slat (nsec): min=3008, max=54790, avg=17161.08, stdev=10426.12 00:11:28.271 clat (usec): min=112, max=910, avg=629.71, stdev=150.70 00:11:28.271 lat (usec): min=116, max=929, avg=646.87, stdev=151.34 00:11:28.271 clat percentiles (usec): 00:11:28.271 | 1.00th=[ 231], 5.00th=[ 359], 10.00th=[ 416], 20.00th=[ 498], 00:11:28.271 | 30.00th=[ 553], 40.00th=[ 594], 50.00th=[ 652], 60.00th=[ 693], 00:11:28.271 | 70.00th=[ 742], 80.00th=[ 775], 90.00th=[ 799], 95.00th=[ 832], 00:11:28.271 | 99.00th=[ 873], 99.50th=[ 873], 99.90th=[ 914], 99.95th=[ 914], 00:11:28.271 | 99.99th=[ 914] 00:11:28.271 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:28.271 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:28.271 lat (usec) : 250=1.14%, 500=19.70%, 750=48.67%, 1000=27.46% 00:11:28.271 lat (msec) : 50=3.03% 00:11:28.271 cpu : usr=0.60%, sys=1.40%, ctx=530, majf=0, minf=1 00:11:28.271 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.271 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.271 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.271 00:11:28.271 Run status group 0 (all jobs): 00:11:28.271 READ: bw=63.9KiB/s (65.4kB/s), 63.9KiB/s-63.9KiB/s (65.4kB/s-65.4kB/s), io=64.0KiB (65.5kB), run=1002-1002msec 00:11:28.271 WRITE: bw=2044KiB/s (2093kB/s), 2044KiB/s-2044KiB/s (2093kB/s-2093kB/s), io=2048KiB (2097kB), run=1002-1002msec 00:11:28.271 00:11:28.271 Disk stats (read/write): 00:11:28.271 nvme0n1: ios=65/512, merge=0/0, ticks=867/263, in_queue=1130, util=98.80% 00:11:28.271 09:32:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:28.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:28.271 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:28.271 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # local i=0 00:11:28.271 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -o NAME,SERIAL 00:11:28.271 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.271 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME,SERIAL 00:11:28.271 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1230 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.271 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1234 -- # return 0 00:11:28.271 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:28.271 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:28.271 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:28.271 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:28.271 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:28.271 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:28.271 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:28.271 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:28.271 rmmod nvme_tcp 00:11:28.271 rmmod nvme_fabrics 00:11:28.271 rmmod nvme_keyring 00:11:28.532 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:28.532 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:28.532 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:28.532 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 3214378 ']' 00:11:28.532 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 3214378 00:11:28.532 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' -z 3214378 ']' 00:11:28.532 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # kill -0 3214378 00:11:28.532 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # uname 00:11:28.532 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:11:28.532 09:32:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3214378 00:11:28.532 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:11:28.532 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:11:28.532 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3214378' 00:11:28.532 killing process with pid 3214378 00:11:28.532 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # kill 3214378 00:11:28.532 09:32:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@977 -- # wait 3214378 00:11:28.532 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:28.532 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:28.532 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:28.532 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:28.532 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:11:28.532 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:28.532 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:11:28.532 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:28.532 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:28.532 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.532 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.532 09:32:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:31.079 00:11:31.079 real 0m18.320s 00:11:31.079 user 0m47.893s 00:11:31.079 sys 0m6.792s 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # xtrace_disable 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:31.079 ************************************ 00:11:31.079 END TEST nvmf_nmic 00:11:31.079 ************************************ 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1110 -- # xtrace_disable 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:31.079 ************************************ 00:11:31.079 START TEST nvmf_fio_target 00:11:31.079 ************************************ 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:31.079 * Looking for test storage... 
00:11:31.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1626 -- # lcov --version 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:31.079 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:11:31.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.080 --rc genhtml_branch_coverage=1 00:11:31.080 --rc genhtml_function_coverage=1 00:11:31.080 --rc genhtml_legend=1 00:11:31.080 --rc geninfo_all_blocks=1 00:11:31.080 --rc geninfo_unexecuted_blocks=1 00:11:31.080 00:11:31.080 ' 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:11:31.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.080 --rc genhtml_branch_coverage=1 00:11:31.080 --rc genhtml_function_coverage=1 00:11:31.080 --rc genhtml_legend=1 00:11:31.080 --rc geninfo_all_blocks=1 00:11:31.080 --rc geninfo_unexecuted_blocks=1 00:11:31.080 00:11:31.080 ' 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:11:31.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.080 --rc genhtml_branch_coverage=1 00:11:31.080 --rc genhtml_function_coverage=1 00:11:31.080 --rc genhtml_legend=1 00:11:31.080 --rc geninfo_all_blocks=1 00:11:31.080 --rc geninfo_unexecuted_blocks=1 00:11:31.080 00:11:31.080 ' 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:11:31.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.080 --rc genhtml_branch_coverage=1 00:11:31.080 --rc genhtml_function_coverage=1 00:11:31.080 --rc genhtml_legend=1 00:11:31.080 --rc geninfo_all_blocks=1 00:11:31.080 --rc geninfo_unexecuted_blocks=1 00:11:31.080 00:11:31.080 ' 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
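The repeated toolchain directories in these PATH dumps are an artifact of paths/export.sh prepending the same entries every time it is sourced (once per test script). Judging only from the trace markers @2 through @6, the script behaves roughly like the sketch below; this is a hypothetical reconstruction from the log, not the actual file contents:

    # paths/export.sh as inferred from the xtrace (reconstruction, not the
    # shipped file). Each sourcing prepends the toolchain bins again, which
    # is why the logged PATH accumulates duplicate segments.
    PATH=/opt/golangci/1.54.2/bin:$PATH   # export.sh@2
    PATH=/opt/go/1.21.1/bin:$PATH         # export.sh@3
    PATH=/opt/protoc/21.7/bin:$PATH       # export.sh@4
    export PATH                           # export.sh@5
    echo $PATH                            # export.sh@6

Duplicate PATH segments are harmless for command lookup (the first match wins), which would explain why the script does not bother to deduplicate.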
00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.080 09:32:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:31.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:31.080 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:31.081 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:31.081 09:32:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:39.225 09:32:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:39.225 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.225 09:32:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:39.225 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:39.225 09:32:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.225 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:39.225 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:39.225 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.225 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:39.225 Found net devices under 0000:31:00.0: cvl_0_0 00:11:39.225 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.225 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:39.225 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.225 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:39.225 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.225 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:39.225 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:39.225 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 
00:11:39.226 Found net devices under 0000:31:00.1: cvl_0_1 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:39.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:39.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:11:39.226 00:11:39.226 --- 10.0.0.2 ping statistics --- 00:11:39.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.226 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:39.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:39.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:11:39.226 00:11:39.226 --- 10.0.0.1 ping statistics --- 00:11:39.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.226 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=3220491 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 3220491 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # '[' -z 3220491 ']' 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local max_retries=100 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:39.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@843 -- # xtrace_disable 00:11:39.226 09:32:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.226 [2024-10-07 09:32:38.426055] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:11:39.226 [2024-10-07 09:32:38.426118] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.226 [2024-10-07 09:32:38.517090] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.226 [2024-10-07 09:32:38.612498] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.226 [2024-10-07 09:32:38.612562] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.226 [2024-10-07 09:32:38.612572] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.226 [2024-10-07 09:32:38.612580] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.226 [2024-10-07 09:32:38.612587] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:39.226 [2024-10-07 09:32:38.614625] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.226 [2024-10-07 09:32:38.614672] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.226 [2024-10-07 09:32:38.614763] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.226 [2024-10-07 09:32:38.614763] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.802 09:32:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:11:39.802 09:32:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@867 -- # return 0 00:11:39.802 09:32:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:39.802 09:32:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@733 -- # xtrace_disable 00:11:39.802 09:32:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.802 09:32:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.802 09:32:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:40.062 [2024-10-07 09:32:39.464997] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.062 09:32:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:40.062 09:32:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:40.062 09:32:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:40.323 09:32:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 
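The nvmf_tcp_init/nvmfappstart sequence above reduces to roughly the following standalone commands, with interface names, addresses, and flags as observed in this run (binary and rpc.py paths shortened):

# Sketch reconstructed from the trace -- not the canonical common.sh, and it assumes the same NIC names.
ip netns add cvl_0_0_ns_spdk                       # target gets its own network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
rpc.py nvmf_create_transport -t tcp -o -u 8192     # create the TCP transport, flags exactly as invoked above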
00:11:40.323 09:32:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:40.584 09:32:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:40.584 09:32:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:40.845 09:32:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:40.845 09:32:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:41.106 09:32:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:41.106 09:32:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:41.106 09:32:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:41.367 09:32:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:41.367 09:32:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:41.628 09:32:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:41.628 09:32:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:41.888 09:32:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:41.888 09:32:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:41.888 09:32:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:42.150 09:32:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:42.150 09:32:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:42.411 09:32:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.411 [2024-10-07 09:32:42.058960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.671 09:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:42.671 09:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:42.932 09:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.845 09:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:44.845 09:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local i=0 00:11:44.845 09:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.845 09:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # [[ -n 4 ]] 00:11:44.845 09:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # nvme_device_counter=4 00:11:44.845 09:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # sleep 2 00:11:46.756 09:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # (( i++ <= 15 )) 00:11:46.756 09:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # lsblk -l -o NAME,SERIAL 00:11:46.756 09:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.756 09:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # nvme_devices=4 00:11:46.756 09:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.756 09:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # return 0 00:11:46.756 09:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:46.756 [global] 00:11:46.756 thread=1 00:11:46.756 invalidate=1 00:11:46.756 rw=write 00:11:46.756 time_based=1 00:11:46.756 runtime=1 00:11:46.756 ioengine=libaio 00:11:46.756 direct=1 00:11:46.756 bs=4096 00:11:46.756 iodepth=1 00:11:46.756 norandommap=0 00:11:46.756 numjobs=1 00:11:46.756 00:11:46.756 verify_dump=1 00:11:46.756 verify_backlog=512 00:11:46.756 verify_state_save=0 00:11:46.756 do_verify=1 00:11:46.756 verify=crc32c-intel 00:11:46.756 [job0] 00:11:46.756 filename=/dev/nvme0n1 00:11:46.756 [job1] 00:11:46.756 filename=/dev/nvme0n2 00:11:46.756 [job2] 00:11:46.756 filename=/dev/nvme0n3 00:11:46.756 [job3] 00:11:46.756 filename=/dev/nvme0n4 00:11:46.756 Could not set queue depth (nvme0n1) 00:11:46.756 Could not set queue depth (nvme0n2) 00:11:46.756 Could not set queue depth (nvme0n3) 00:11:46.756 Could not set queue depth (nvme0n4) 00:11:47.016 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:47.016 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:47.016 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:47.016 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:47.016 fio-3.35 00:11:47.016 Starting 4 threads 00:11:48.409 00:11:48.409 job0: (groupid=0, jobs=1): err= 0: pid=3222272: 
Mon Oct 7 09:32:47 2024 00:11:48.409 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:48.409 slat (nsec): min=8961, max=58752, avg=27027.46, stdev=2689.08 00:11:48.409 clat (usec): min=603, max=1310, avg=987.99, stdev=117.46 00:11:48.409 lat (usec): min=630, max=1336, avg=1015.02, stdev=117.49 00:11:48.409 clat percentiles (usec): 00:11:48.409 | 1.00th=[ 709], 5.00th=[ 791], 10.00th=[ 832], 20.00th=[ 889], 00:11:48.409 | 30.00th=[ 930], 40.00th=[ 963], 50.00th=[ 988], 60.00th=[ 1020], 00:11:48.409 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1172], 00:11:48.409 | 99.00th=[ 1221], 99.50th=[ 1287], 99.90th=[ 1303], 99.95th=[ 1303], 00:11:48.409 | 99.99th=[ 1303] 00:11:48.409 write: IOPS=801, BW=3205KiB/s (3282kB/s)(3208KiB/1001msec); 0 zone resets 00:11:48.409 slat (nsec): min=9848, max=55554, avg=31991.06, stdev=9123.19 00:11:48.409 clat (usec): min=244, max=976, avg=552.51, stdev=127.63 00:11:48.409 lat (usec): min=254, max=1011, avg=584.50, stdev=131.39 00:11:48.409 clat percentiles (usec): 00:11:48.409 | 1.00th=[ 277], 5.00th=[ 343], 10.00th=[ 388], 20.00th=[ 453], 00:11:48.409 | 30.00th=[ 490], 40.00th=[ 515], 50.00th=[ 545], 60.00th=[ 586], 00:11:48.409 | 70.00th=[ 619], 80.00th=[ 660], 90.00th=[ 717], 95.00th=[ 766], 00:11:48.409 | 99.00th=[ 873], 99.50th=[ 906], 99.90th=[ 979], 99.95th=[ 979], 00:11:48.409 | 99.99th=[ 979] 00:11:48.409 bw ( KiB/s): min= 4096, max= 4096, per=32.43%, avg=4096.00, stdev= 0.00, samples=1 00:11:48.409 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:48.409 lat (usec) : 250=0.08%, 500=21.23%, 750=36.83%, 1000=24.12% 00:11:48.409 lat (msec) : 2=17.73% 00:11:48.409 cpu : usr=2.30%, sys=3.80%, ctx=1316, majf=0, minf=1 00:11:48.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:48.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.409 issued rwts: total=512,802,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:48.409 job1: (groupid=0, jobs=1): err= 0: pid=3222273: Mon Oct 7 09:32:47 2024 00:11:48.409 read: IOPS=638, BW=2553KiB/s (2615kB/s)(2556KiB/1001msec) 00:11:48.409 slat (nsec): min=7046, max=60030, avg=23315.74, stdev=8305.93 00:11:48.409 clat (usec): min=467, max=2453, avg=768.87, stdev=98.50 00:11:48.409 lat (usec): min=494, max=2480, avg=792.19, stdev=100.09 00:11:48.409 clat percentiles (usec): 00:11:48.409 | 1.00th=[ 570], 5.00th=[ 644], 10.00th=[ 668], 20.00th=[ 701], 00:11:48.409 | 30.00th=[ 742], 40.00th=[ 766], 50.00th=[ 775], 60.00th=[ 791], 00:11:48.409 | 70.00th=[ 807], 80.00th=[ 816], 90.00th=[ 848], 95.00th=[ 873], 00:11:48.409 | 99.00th=[ 938], 99.50th=[ 971], 99.90th=[ 2442], 99.95th=[ 2442], 00:11:48.409 | 99.99th=[ 2442] 00:11:48.409 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:48.409 slat (nsec): min=9831, max=62291, avg=28784.79, stdev=10858.75 00:11:48.409 clat (usec): min=189, max=766, avg=440.63, stdev=88.97 00:11:48.409 lat (usec): min=199, max=777, avg=469.42, stdev=94.42 00:11:48.410 clat percentiles (usec): 00:11:48.410 | 1.00th=[ 243], 5.00th=[ 281], 10.00th=[ 322], 20.00th=[ 359], 00:11:48.410 | 30.00th=[ 404], 40.00th=[ 433], 50.00th=[ 449], 60.00th=[ 469], 00:11:48.410 | 70.00th=[ 482], 80.00th=[ 506], 90.00th=[ 545], 95.00th=[ 578], 00:11:48.410 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 709], 99.95th=[ 766], 00:11:48.410 
| 99.99th=[ 766] 00:11:48.410 bw ( KiB/s): min= 4096, max= 4096, per=32.43%, avg=4096.00, stdev= 0.00, samples=1 00:11:48.410 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:48.410 lat (usec) : 250=0.78%, 500=47.38%, 750=26.76%, 1000=25.02% 00:11:48.410 lat (msec) : 4=0.06% 00:11:48.410 cpu : usr=2.80%, sys=4.10%, ctx=1664, majf=0, minf=1 00:11:48.410 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:48.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.410 issued rwts: total=639,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.410 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:48.410 job2: (groupid=0, jobs=1): err= 0: pid=3222274: Mon Oct 7 09:32:47 2024 00:11:48.410 read: IOPS=17, BW=70.7KiB/s (72.4kB/s)(72.0KiB/1018msec) 00:11:48.410 slat (nsec): min=26226, max=27439, avg=26710.39, stdev=316.86 00:11:48.410 clat (usec): min=1064, max=42895, avg=37526.84, stdev=13258.94 00:11:48.410 lat (usec): min=1090, max=42922, avg=37553.55, stdev=13258.88 00:11:48.410 clat percentiles (usec): 00:11:48.410 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[ 1123], 20.00th=[41681], 00:11:48.410 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:48.410 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:11:48.410 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:11:48.410 | 99.99th=[42730] 00:11:48.410 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:11:48.410 slat (nsec): min=10153, max=56065, avg=32478.79, stdev=8560.97 00:11:48.410 clat (usec): min=272, max=982, avg=625.58, stdev=126.27 00:11:48.410 lat (usec): min=284, max=1017, avg=658.06, stdev=128.78 00:11:48.410 clat percentiles (usec): 00:11:48.410 | 1.00th=[ 318], 5.00th=[ 420], 10.00th=[ 474], 20.00th=[ 523], 00:11:48.410 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 660], 00:11:48.410 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 783], 95.00th=[ 848], 00:11:48.410 | 99.00th=[ 938], 99.50th=[ 963], 99.90th=[ 979], 99.95th=[ 979], 00:11:48.410 | 99.99th=[ 979] 00:11:48.410 bw ( KiB/s): min= 4096, max= 4096, per=32.43%, avg=4096.00, stdev= 0.00, samples=1 00:11:48.410 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:48.410 lat (usec) : 500=14.91%, 750=65.85%, 1000=15.85% 00:11:48.410 lat (msec) : 2=0.38%, 50=3.02% 00:11:48.410 cpu : usr=1.18%, sys=1.28%, ctx=531, majf=0, minf=1 00:11:48.410 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:48.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.410 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.410 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:48.410 job3: (groupid=0, jobs=1): err= 0: pid=3222275: Mon Oct 7 09:32:47 2024 00:11:48.410 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:48.410 slat (nsec): min=7214, max=55916, avg=25625.87, stdev=3665.08 00:11:48.410 clat (usec): min=403, max=1203, avg=914.76, stdev=141.91 00:11:48.410 lat (usec): min=429, max=1229, avg=940.39, stdev=142.19 00:11:48.410 clat percentiles (usec): 00:11:48.410 | 1.00th=[ 486], 5.00th=[ 627], 10.00th=[ 725], 20.00th=[ 799], 00:11:48.410 | 30.00th=[ 857], 40.00th=[ 898], 50.00th=[ 938], 60.00th=[ 971], 
00:11:48.410 | 70.00th=[ 1004], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1106], 00:11:48.410 | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1205], 99.95th=[ 1205], 00:11:48.410 | 99.99th=[ 1205] 00:11:48.410 write: IOPS=875, BW=3500KiB/s (3585kB/s)(3504KiB/1001msec); 0 zone resets 00:11:48.410 slat (nsec): min=10145, max=53931, avg=32854.97, stdev=5559.86 00:11:48.410 clat (usec): min=157, max=1010, avg=546.87, stdev=144.21 00:11:48.410 lat (usec): min=191, max=1042, avg=579.73, stdev=145.08 00:11:48.410 clat percentiles (usec): 00:11:48.410 | 1.00th=[ 262], 5.00th=[ 293], 10.00th=[ 351], 20.00th=[ 408], 00:11:48.410 | 30.00th=[ 465], 40.00th=[ 510], 50.00th=[ 545], 60.00th=[ 594], 00:11:48.410 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 734], 95.00th=[ 775], 00:11:48.410 | 99.00th=[ 840], 99.50th=[ 848], 99.90th=[ 1012], 99.95th=[ 1012], 00:11:48.410 | 99.99th=[ 1012] 00:11:48.410 bw ( KiB/s): min= 4096, max= 4096, per=32.43%, avg=4096.00, stdev= 0.00, samples=1 00:11:48.410 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:48.410 lat (usec) : 250=0.29%, 500=24.06%, 750=39.05%, 1000=25.36% 00:11:48.410 lat (msec) : 2=11.24% 00:11:48.410 cpu : usr=2.50%, sys=3.90%, ctx=1388, majf=0, minf=2 00:11:48.410 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:48.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.410 issued rwts: total=512,876,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.410 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:48.410 00:11:48.410 Run status group 0 (all jobs): 00:11:48.410 READ: bw=6605KiB/s (6764kB/s), 70.7KiB/s-2553KiB/s (72.4kB/s-2615kB/s), io=6724KiB (6885kB), run=1001-1018msec 00:11:48.410 WRITE: bw=12.3MiB/s (12.9MB/s), 2012KiB/s-4092KiB/s (2060kB/s-4190kB/s), io=12.6MiB (13.2MB), run=1001-1018msec 00:11:48.410 00:11:48.410 Disk stats (read/write): 00:11:48.410 nvme0n1: ios=564/543, merge=0/0, ticks=926/287, in_queue=1213, util=97.49% 00:11:48.410 nvme0n2: ios=535/908, merge=0/0, ticks=1347/389, in_queue=1736, util=97.76% 00:11:48.410 nvme0n3: ios=36/512, merge=0/0, ticks=1426/313, in_queue=1739, util=97.69% 00:11:48.410 nvme0n4: ios=512/624, merge=0/0, ticks=464/306, in_queue=770, util=89.50% 00:11:48.410 09:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:48.410 [global] 00:11:48.410 thread=1 00:11:48.410 invalidate=1 00:11:48.410 rw=randwrite 00:11:48.410 time_based=1 00:11:48.410 runtime=1 00:11:48.410 ioengine=libaio 00:11:48.410 direct=1 00:11:48.410 bs=4096 00:11:48.410 iodepth=1 00:11:48.410 norandommap=0 00:11:48.410 numjobs=1 00:11:48.410 00:11:48.410 verify_dump=1 00:11:48.410 verify_backlog=512 00:11:48.410 verify_state_save=0 00:11:48.410 do_verify=1 00:11:48.410 verify=crc32c-intel 00:11:48.410 [job0] 00:11:48.410 filename=/dev/nvme0n1 00:11:48.410 [job1] 00:11:48.410 filename=/dev/nvme0n2 00:11:48.410 [job2] 00:11:48.410 filename=/dev/nvme0n3 00:11:48.410 [job3] 00:11:48.410 filename=/dev/nvme0n4 00:11:48.410 Could not set queue depth (nvme0n1) 00:11:48.410 Could not set queue depth (nvme0n2) 00:11:48.410 Could not set queue depth (nvme0n3) 00:11:48.410 Could not set queue depth (nvme0n4) 00:11:48.672 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:48.672 job1: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:48.672 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:48.672 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:48.672 fio-3.35 00:11:48.672 Starting 4 threads 00:11:50.066 00:11:50.066 job0: (groupid=0, jobs=1): err= 0: pid=3222791: Mon Oct 7 09:32:49 2024 00:11:50.066 read: IOPS=16, BW=67.9KiB/s (69.5kB/s)(68.0KiB/1002msec) 00:11:50.066 slat (nsec): min=25560, max=26928, avg=25951.59, stdev=380.52 00:11:50.066 clat (usec): min=1024, max=43036, avg=39530.47, stdev=9935.82 00:11:50.066 lat (usec): min=1051, max=43061, avg=39556.42, stdev=9935.59 00:11:50.066 clat percentiles (usec): 00:11:50.066 | 1.00th=[ 1029], 5.00th=[ 1029], 10.00th=[41157], 20.00th=[41681], 00:11:50.066 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:50.066 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:11:50.066 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:11:50.066 | 99.99th=[43254] 00:11:50.066 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:11:50.066 slat (nsec): min=8600, max=73702, avg=29369.84, stdev=8530.10 00:11:50.066 clat (usec): min=151, max=990, avg=605.44, stdev=135.57 00:11:50.066 lat (usec): min=183, max=1022, avg=634.81, stdev=138.27 00:11:50.066 clat percentiles (usec): 00:11:50.066 | 1.00th=[ 281], 5.00th=[ 379], 10.00th=[ 424], 20.00th=[ 494], 00:11:50.066 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 644], 00:11:50.066 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 775], 95.00th=[ 840], 00:11:50.066 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 988], 99.95th=[ 988], 00:11:50.066 | 99.99th=[ 988] 00:11:50.066 bw ( KiB/s): min= 4096, max= 4096, per=40.08%, avg=4096.00, stdev= 0.00, samples=1 00:11:50.066 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:50.066 lat (usec) : 250=0.19%, 500=20.04%, 750=63.14%, 1000=13.42% 00:11:50.066 lat (msec) : 2=0.19%, 50=3.02% 00:11:50.066 cpu : usr=1.20%, sys=1.90%, ctx=530, majf=0, minf=1 00:11:50.066 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.066 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.066 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.066 job1: (groupid=0, jobs=1): err= 0: pid=3222792: Mon Oct 7 09:32:49 2024 00:11:50.066 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:50.066 slat (nsec): min=24628, max=40944, avg=25313.40, stdev=1160.52 00:11:50.066 clat (usec): min=675, max=1275, avg=965.01, stdev=90.70 00:11:50.066 lat (usec): min=700, max=1300, avg=990.33, stdev=90.60 00:11:50.066 clat percentiles (usec): 00:11:50.066 | 1.00th=[ 725], 5.00th=[ 799], 10.00th=[ 848], 20.00th=[ 889], 00:11:50.066 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 988], 00:11:50.066 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1106], 00:11:50.066 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[ 1270], 99.95th=[ 1270], 00:11:50.066 | 99.99th=[ 1270] 00:11:50.066 write: IOPS=741, BW=2965KiB/s (3036kB/s)(2968KiB/1001msec); 0 zone resets 00:11:50.066 slat (nsec): min=9332, max=63957, avg=28242.92, stdev=8562.09 
00:11:50.066 clat (usec): min=290, max=2255, avg=622.77, stdev=127.73 00:11:50.066 lat (usec): min=299, max=2286, avg=651.02, stdev=130.63 00:11:50.066 clat percentiles (usec): 00:11:50.066 | 1.00th=[ 355], 5.00th=[ 408], 10.00th=[ 465], 20.00th=[ 529], 00:11:50.066 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 660], 00:11:50.066 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 775], 00:11:50.066 | 99.00th=[ 881], 99.50th=[ 1045], 99.90th=[ 2245], 99.95th=[ 2245], 00:11:50.066 | 99.99th=[ 2245] 00:11:50.066 bw ( KiB/s): min= 4096, max= 4096, per=40.08%, avg=4096.00, stdev= 0.00, samples=1 00:11:50.066 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:50.066 lat (usec) : 500=8.61%, 750=46.17%, 1000=30.22% 00:11:50.066 lat (msec) : 2=14.91%, 4=0.08% 00:11:50.066 cpu : usr=1.30%, sys=4.20%, ctx=1255, majf=0, minf=1 00:11:50.066 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.066 issued rwts: total=512,742,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.066 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.066 job2: (groupid=0, jobs=1): err= 0: pid=3222793: Mon Oct 7 09:32:49 2024 00:11:50.066 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:50.066 slat (nsec): min=7127, max=60807, avg=27158.64, stdev=4406.22 00:11:50.066 clat (usec): min=479, max=3606, avg=981.25, stdev=149.98 00:11:50.066 lat (usec): min=507, max=3633, avg=1008.41, stdev=150.53 00:11:50.066 clat percentiles (usec): 00:11:50.066 | 1.00th=[ 734], 5.00th=[ 791], 10.00th=[ 857], 20.00th=[ 906], 00:11:50.066 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 988], 60.00th=[ 1004], 00:11:50.066 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1090], 95.00th=[ 1123], 00:11:50.067 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 3621], 99.95th=[ 3621], 00:11:50.067 | 99.99th=[ 3621] 00:11:50.067 write: IOPS=757, BW=3029KiB/s (3102kB/s)(3032KiB/1001msec); 0 zone resets 00:11:50.067 slat (nsec): min=9431, max=57122, avg=31881.01, stdev=8699.99 00:11:50.067 clat (usec): min=200, max=981, avg=592.52, stdev=138.94 00:11:50.067 lat (usec): min=211, max=1015, avg=624.40, stdev=142.10 00:11:50.067 clat percentiles (usec): 00:11:50.067 | 1.00th=[ 281], 5.00th=[ 363], 10.00th=[ 400], 20.00th=[ 478], 00:11:50.067 | 30.00th=[ 523], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:11:50.067 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 816], 00:11:50.067 | 99.00th=[ 914], 99.50th=[ 938], 99.90th=[ 979], 99.95th=[ 979], 00:11:50.067 | 99.99th=[ 979] 00:11:50.067 bw ( KiB/s): min= 4096, max= 4096, per=40.08%, avg=4096.00, stdev= 0.00, samples=1 00:11:50.067 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:50.067 lat (usec) : 250=0.16%, 500=14.80%, 750=38.27%, 1000=29.37% 00:11:50.067 lat (msec) : 2=17.32%, 4=0.08% 00:11:50.067 cpu : usr=1.80%, sys=6.00%, ctx=1272, majf=0, minf=1 00:11:50.067 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.067 issued rwts: total=512,758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.067 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.067 job3: (groupid=0, jobs=1): err= 0: pid=3222794: Mon Oct 7 
09:32:49 2024 00:11:50.067 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:50.067 slat (nsec): min=7197, max=59493, avg=25490.09, stdev=6174.35 00:11:50.067 clat (usec): min=285, max=42452, avg=1226.98, stdev=4412.57 00:11:50.067 lat (usec): min=294, max=42479, avg=1252.47, stdev=4412.79 00:11:50.067 clat percentiles (usec): 00:11:50.067 | 1.00th=[ 424], 5.00th=[ 553], 10.00th=[ 594], 20.00th=[ 652], 00:11:50.067 | 30.00th=[ 685], 40.00th=[ 717], 50.00th=[ 758], 60.00th=[ 807], 00:11:50.067 | 70.00th=[ 832], 80.00th=[ 857], 90.00th=[ 881], 95.00th=[ 914], 00:11:50.067 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:50.067 | 99.99th=[42206] 00:11:50.067 write: IOPS=547, BW=2190KiB/s (2242kB/s)(2192KiB/1001msec); 0 zone resets 00:11:50.067 slat (nsec): min=10192, max=70181, avg=34301.14, stdev=6160.53 00:11:50.067 clat (usec): min=238, max=1453, avg=604.02, stdev=167.61 00:11:50.067 lat (usec): min=273, max=1488, avg=638.32, stdev=167.88 00:11:50.067 clat percentiles (usec): 00:11:50.067 | 1.00th=[ 281], 5.00th=[ 367], 10.00th=[ 396], 20.00th=[ 457], 00:11:50.067 | 30.00th=[ 515], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 635], 00:11:50.067 | 70.00th=[ 676], 80.00th=[ 750], 90.00th=[ 840], 95.00th=[ 914], 00:11:50.067 | 99.00th=[ 988], 99.50th=[ 1037], 99.90th=[ 1450], 99.95th=[ 1450], 00:11:50.067 | 99.99th=[ 1450] 00:11:50.067 bw ( KiB/s): min= 4096, max= 4096, per=40.08%, avg=4096.00, stdev= 0.00, samples=1 00:11:50.067 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:50.067 lat (usec) : 250=0.09%, 500=15.00%, 750=50.47%, 1000=33.21% 00:11:50.067 lat (msec) : 2=0.66%, 50=0.57% 00:11:50.067 cpu : usr=1.80%, sys=3.10%, ctx=1061, majf=0, minf=1 00:11:50.067 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.067 issued rwts: total=512,548,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.067 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.067 00:11:50.067 Run status group 0 (all jobs): 00:11:50.067 READ: bw=6200KiB/s (6348kB/s), 67.9KiB/s-2046KiB/s (69.5kB/s-2095kB/s), io=6212KiB (6361kB), run=1001-1002msec 00:11:50.067 WRITE: bw=9.98MiB/s (10.5MB/s), 2044KiB/s-3029KiB/s (2093kB/s-3102kB/s), io=10.0MiB (10.5MB), run=1001-1002msec 00:11:50.067 00:11:50.067 Disk stats (read/write): 00:11:50.067 nvme0n1: ios=63/512, merge=0/0, ticks=638/235, in_queue=873, util=91.88% 00:11:50.067 nvme0n2: ios=534/512, merge=0/0, ticks=521/307, in_queue=828, util=86.85% 00:11:50.067 nvme0n3: ios=525/512, merge=0/0, ticks=1376/225, in_queue=1601, util=97.15% 00:11:50.067 nvme0n4: ios=369/512, merge=0/0, ticks=1258/287, in_queue=1545, util=98.83% 00:11:50.067 09:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:50.067 [global] 00:11:50.067 thread=1 00:11:50.067 invalidate=1 00:11:50.067 rw=write 00:11:50.067 time_based=1 00:11:50.067 runtime=1 00:11:50.067 ioengine=libaio 00:11:50.067 direct=1 00:11:50.067 bs=4096 00:11:50.067 iodepth=128 00:11:50.067 norandommap=0 00:11:50.067 numjobs=1 00:11:50.067 00:11:50.067 verify_dump=1 00:11:50.067 verify_backlog=512 00:11:50.067 verify_state_save=0 00:11:50.067 do_verify=1 00:11:50.067 verify=crc32c-intel 00:11:50.067 [job0] 00:11:50.067 filename=/dev/nvme0n1 
00:11:50.067 [job1] 00:11:50.067 filename=/dev/nvme0n2 00:11:50.067 [job2] 00:11:50.067 filename=/dev/nvme0n3 00:11:50.067 [job3] 00:11:50.067 filename=/dev/nvme0n4 00:11:50.067 Could not set queue depth (nvme0n1) 00:11:50.067 Could not set queue depth (nvme0n2) 00:11:50.067 Could not set queue depth (nvme0n3) 00:11:50.067 Could not set queue depth (nvme0n4) 00:11:50.327 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:50.327 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:50.327 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:50.327 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:50.327 fio-3.35 00:11:50.327 Starting 4 threads 00:11:51.719 00:11:51.719 job0: (groupid=0, jobs=1): err= 0: pid=3223321: Mon Oct 7 09:32:50 2024 00:11:51.719 read: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec) 00:11:51.719 slat (nsec): min=949, max=7144.5k, avg=61857.17, stdev=445911.00 00:11:51.719 clat (usec): min=2682, max=33343, avg=8007.41, stdev=2908.05 00:11:51.719 lat (usec): min=2687, max=33352, avg=8069.27, stdev=2939.99 00:11:51.719 clat percentiles (usec): 00:11:51.719 | 1.00th=[ 4490], 5.00th=[ 5604], 10.00th=[ 5932], 20.00th=[ 6390], 00:11:51.719 | 30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7177], 60.00th=[ 7635], 00:11:51.720 | 70.00th=[ 8291], 80.00th=[ 8979], 90.00th=[10945], 95.00th=[11863], 00:11:51.720 | 99.00th=[21365], 99.50th=[29492], 99.90th=[32637], 99.95th=[33424], 00:11:51.720 | 99.99th=[33424] 00:11:51.720 write: IOPS=8268, BW=32.3MiB/s (33.9MB/s)(32.4MiB/1003msec); 0 zone resets 00:11:51.720 slat (nsec): min=1600, max=8661.3k, avg=53959.51, stdev=365507.42 00:11:51.720 clat (usec): min=1129, max=33310, avg=7431.36, stdev=3183.06 00:11:51.720 lat (usec): min=1140, max=33312, avg=7485.32, stdev=3210.96 00:11:51.720 clat percentiles (usec): 00:11:51.720 | 1.00th=[ 2835], 5.00th=[ 3949], 10.00th=[ 4490], 20.00th=[ 5735], 00:11:51.720 | 30.00th=[ 6128], 40.00th=[ 6456], 50.00th=[ 6718], 60.00th=[ 6915], 00:11:51.720 | 70.00th=[ 7177], 80.00th=[ 8586], 90.00th=[12649], 95.00th=[14615], 00:11:51.720 | 99.00th=[16909], 99.50th=[21627], 99.90th=[27657], 99.95th=[27657], 00:11:51.720 | 99.99th=[33424] 00:11:51.720 bw ( KiB/s): min=29408, max=36184, per=32.12%, avg=32796.00, stdev=4791.36, samples=2 00:11:51.720 iops : min= 7352, max= 9046, avg=8199.00, stdev=1197.84, samples=2 00:11:51.720 lat (msec) : 2=0.11%, 4=2.93%, 10=82.15%, 20=14.04%, 50=0.77% 00:11:51.720 cpu : usr=5.39%, sys=9.58%, ctx=633, majf=0, minf=1 00:11:51.720 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:51.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:51.720 issued rwts: total=8192,8293,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:51.720 job1: (groupid=0, jobs=1): err= 0: pid=3223322: Mon Oct 7 09:32:50 2024 00:11:51.720 read: IOPS=6707, BW=26.2MiB/s (27.5MB/s)(26.3MiB/1002msec) 00:11:51.720 slat (nsec): min=900, max=21011k, avg=67180.14, stdev=510031.11 00:11:51.720 clat (usec): min=968, max=31280, avg=9432.30, stdev=3293.85 00:11:51.720 lat (usec): min=1532, max=31286, avg=9499.48, stdev=3310.40 00:11:51.720 clat percentiles (usec): 00:11:51.720 
| 1.00th=[ 3097], 5.00th=[ 5997], 10.00th=[ 6915], 20.00th=[ 7701], 00:11:51.720 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:11:51.720 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[12649], 95.00th=[14877], 00:11:51.720 | 99.00th=[24249], 99.50th=[29230], 99.90th=[29230], 99.95th=[29230], 00:11:51.720 | 99.99th=[31327] 00:11:51.720 write: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec); 0 zone resets 00:11:51.720 slat (nsec): min=1538, max=13150k, avg=62459.46, stdev=373870.87 00:11:51.720 clat (usec): min=599, max=27038, avg=8849.02, stdev=3728.80 00:11:51.720 lat (usec): min=710, max=27752, avg=8911.48, stdev=3747.62 00:11:51.720 clat percentiles (usec): 00:11:51.720 | 1.00th=[ 1549], 5.00th=[ 3884], 10.00th=[ 5211], 20.00th=[ 6915], 00:11:51.720 | 30.00th=[ 7373], 40.00th=[ 7767], 50.00th=[ 8291], 60.00th=[ 8586], 00:11:51.720 | 70.00th=[ 8848], 80.00th=[ 9503], 90.00th=[14353], 95.00th=[17171], 00:11:51.720 | 99.00th=[22414], 99.50th=[23987], 99.90th=[26870], 99.95th=[27132], 00:11:51.720 | 99.99th=[27132] 00:11:51.720 bw ( KiB/s): min=28176, max=28672, per=27.84%, avg=28424.00, stdev=350.72, samples=2 00:11:51.720 iops : min= 7044, max= 7168, avg=7106.00, stdev=87.68, samples=2 00:11:51.720 lat (usec) : 750=0.04%, 1000=0.12% 00:11:51.720 lat (msec) : 2=0.88%, 4=2.96%, 10=73.48%, 20=20.86%, 50=1.67% 00:11:51.720 cpu : usr=3.90%, sys=6.29%, ctx=746, majf=0, minf=1 00:11:51.720 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:51.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:51.720 issued rwts: total=6721,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:51.720 job2: (groupid=0, jobs=1): err= 0: pid=3223323: Mon Oct 7 09:32:50 2024 00:11:51.720 read: IOPS=3242, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1007msec) 00:11:51.720 slat (nsec): min=916, max=26784k, avg=183650.32, stdev=1455520.76 00:11:51.720 clat (usec): min=979, max=90514, avg=22124.21, stdev=21431.28 00:11:51.720 lat (usec): min=6673, max=90522, avg=22307.86, stdev=21557.77 00:11:51.720 clat percentiles (usec): 00:11:51.720 | 1.00th=[ 7177], 5.00th=[ 8225], 10.00th=[ 8979], 20.00th=[ 9765], 00:11:51.720 | 30.00th=[10683], 40.00th=[11863], 50.00th=[12911], 60.00th=[14353], 00:11:51.720 | 70.00th=[17433], 80.00th=[23200], 90.00th=[67634], 95.00th=[72877], 00:11:51.720 | 99.00th=[87557], 99.50th=[90702], 99.90th=[90702], 99.95th=[90702], 00:11:51.720 | 99.99th=[90702] 00:11:51.720 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:11:51.720 slat (nsec): min=1597, max=16243k, avg=108864.56, stdev=716828.56 00:11:51.720 clat (usec): min=5705, max=83851, avg=15388.25, stdev=11180.03 00:11:51.720 lat (usec): min=5715, max=83864, avg=15497.11, stdev=11238.98 00:11:51.720 clat percentiles (usec): 00:11:51.720 | 1.00th=[ 6849], 5.00th=[ 7373], 10.00th=[ 7570], 20.00th=[ 7898], 00:11:51.720 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11600], 60.00th=[11994], 00:11:51.720 | 70.00th=[14877], 80.00th=[17433], 90.00th=[27919], 95.00th=[38011], 00:11:51.720 | 99.00th=[64750], 99.50th=[72877], 99.90th=[73925], 99.95th=[83362], 00:11:51.720 | 99.99th=[83362] 00:11:51.720 bw ( KiB/s): min=12288, max=16384, per=14.04%, avg=14336.00, stdev=2896.31, samples=2 00:11:51.720 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:11:51.720 lat (usec) : 1000=0.01% 
00:11:51.720 lat (msec) : 10=26.30%, 20=52.53%, 50=13.43%, 100=7.72% 00:11:51.720 cpu : usr=2.29%, sys=2.88%, ctx=378, majf=0, minf=1 00:11:51.720 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:51.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:51.720 issued rwts: total=3265,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:51.720 job3: (groupid=0, jobs=1): err= 0: pid=3223324: Mon Oct 7 09:32:50 2024 00:11:51.720 read: IOPS=6226, BW=24.3MiB/s (25.5MB/s)(24.5MiB/1007msec) 00:11:51.720 slat (nsec): min=961, max=8559.3k, avg=76561.18, stdev=411516.47 00:11:51.720 clat (usec): min=1411, max=22855, avg=9783.73, stdev=2075.37 00:11:51.720 lat (usec): min=3032, max=22858, avg=9860.29, stdev=2078.34 00:11:51.720 clat percentiles (usec): 00:11:51.720 | 1.00th=[ 5735], 5.00th=[ 6587], 10.00th=[ 7308], 20.00th=[ 8291], 00:11:51.720 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10290], 00:11:51.720 | 70.00th=[10552], 80.00th=[11076], 90.00th=[11469], 95.00th=[12125], 00:11:51.720 | 99.00th=[18744], 99.50th=[20579], 99.90th=[21365], 99.95th=[22938], 00:11:51.720 | 99.99th=[22938] 00:11:51.720 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:11:51.720 slat (nsec): min=1640, max=7034.0k, avg=74688.61, stdev=385238.79 00:11:51.720 clat (usec): min=2690, max=22887, avg=9849.89, stdev=3703.47 00:11:51.720 lat (usec): min=2701, max=22896, avg=9924.58, stdev=3718.77 00:11:51.720 clat percentiles (usec): 00:11:51.720 | 1.00th=[ 4113], 5.00th=[ 4948], 10.00th=[ 6456], 20.00th=[ 7570], 00:11:51.720 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9634], 00:11:51.720 | 70.00th=[10028], 80.00th=[11076], 90.00th=[15926], 95.00th=[19268], 00:11:51.720 | 99.00th=[21365], 99.50th=[22414], 99.90th=[22938], 99.95th=[22938], 00:11:51.720 | 99.99th=[22938] 00:11:51.720 bw ( KiB/s): min=26264, max=26968, per=26.07%, avg=26616.00, stdev=497.80, samples=2 00:11:51.720 iops : min= 6566, max= 6742, avg=6654.00, stdev=124.45, samples=2 00:11:51.720 lat (msec) : 2=0.01%, 4=0.67%, 10=60.42%, 20=36.31%, 50=2.58% 00:11:51.720 cpu : usr=3.18%, sys=5.96%, ctx=706, majf=0, minf=1 00:11:51.720 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:51.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:51.720 issued rwts: total=6270,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:51.720 00:11:51.720 Run status group 0 (all jobs): 00:11:51.720 READ: bw=94.8MiB/s (99.4MB/s), 12.7MiB/s-31.9MiB/s (13.3MB/s-33.5MB/s), io=95.5MiB (100MB), run=1002-1007msec 00:11:51.720 WRITE: bw=99.7MiB/s (105MB/s), 13.9MiB/s-32.3MiB/s (14.6MB/s-33.9MB/s), io=100MiB (105MB), run=1002-1007msec 00:11:51.720 00:11:51.720 Disk stats (read/write): 00:11:51.720 nvme0n1: ios=6697/6671, merge=0/0, ticks=51553/48993, in_queue=100546, util=91.78% 00:11:51.720 nvme0n2: ios=5665/5856, merge=0/0, ticks=40435/40362, in_queue=80797, util=86.84% 00:11:51.720 nvme0n3: ios=2825/3072, merge=0/0, ticks=15455/12411, in_queue=27866, util=88.38% 00:11:51.720 nvme0n4: ios=5269/5632, merge=0/0, ticks=22092/23172, in_queue=45264, util=100.00% 00:11:51.720 09:32:50 
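Each fio pass in this test is driven through scripts/fio-wrapper, which generates the job file echoed in the trace and runs fio against it. Reproducing the iodepth=128 write leg by hand would look approximately like this (wrapper path shortened; the raw-fio form is an equivalence sketch based on the dumped job file, not the wrapper's actual exec line):

# Via the wrapper, as in the trace:
scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
# Approximately equivalent raw fio invocation for one of the four devices:
fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --bs=4096 --iodepth=128 --rw=write --time_based=1 --runtime=1 \
    --verify=crc32c-intel --do_verify=1 --verify_backlog=512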
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:51.720 [global] 00:11:51.720 thread=1 00:11:51.720 invalidate=1 00:11:51.720 rw=randwrite 00:11:51.720 time_based=1 00:11:51.720 runtime=1 00:11:51.720 ioengine=libaio 00:11:51.720 direct=1 00:11:51.720 bs=4096 00:11:51.720 iodepth=128 00:11:51.720 norandommap=0 00:11:51.720 numjobs=1 00:11:51.720 00:11:51.720 verify_dump=1 00:11:51.720 verify_backlog=512 00:11:51.720 verify_state_save=0 00:11:51.720 do_verify=1 00:11:51.720 verify=crc32c-intel 00:11:51.720 [job0] 00:11:51.720 filename=/dev/nvme0n1 00:11:51.720 [job1] 00:11:51.720 filename=/dev/nvme0n2 00:11:51.720 [job2] 00:11:51.720 filename=/dev/nvme0n3 00:11:51.720 [job3] 00:11:51.720 filename=/dev/nvme0n4 00:11:51.720 Could not set queue depth (nvme0n1) 00:11:51.720 Could not set queue depth (nvme0n2) 00:11:51.720 Could not set queue depth (nvme0n3) 00:11:51.720 Could not set queue depth (nvme0n4) 00:11:51.980 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:51.980 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:51.980 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:51.980 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:51.980 fio-3.35 00:11:51.980 Starting 4 threads 00:11:53.364 00:11:53.364 job0: (groupid=0, jobs=1): err= 0: pid=3223840: Mon Oct 7 09:32:52 2024 00:11:53.364 read: IOPS=6540, BW=25.5MiB/s (26.8MB/s)(25.7MiB/1007msec) 00:11:53.364 slat (nsec): min=880, max=19861k, avg=67643.86, stdev=624488.95 00:11:53.364 clat (usec): min=825, max=47793, avg=10004.02, stdev=7300.77 00:11:53.364 lat (usec): min=2316, max=47821, avg=10071.67, stdev=7361.93 00:11:53.364 clat percentiles (usec): 00:11:53.364 | 1.00th=[ 3720], 5.00th=[ 4948], 10.00th=[ 5407], 20.00th=[ 5997], 00:11:53.364 | 30.00th=[ 6325], 40.00th=[ 6849], 50.00th=[ 7504], 60.00th=[ 8225], 00:11:53.364 | 70.00th=[ 8848], 80.00th=[10159], 90.00th=[21890], 95.00th=[27919], 00:11:53.364 | 99.00th=[38011], 99.50th=[38011], 99.90th=[45351], 99.95th=[45351], 00:11:53.364 | 99.99th=[47973] 00:11:53.364 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:11:53.364 slat (nsec): min=1496, max=24406k, avg=68157.24, stdev=544839.70 00:11:53.364 clat (usec): min=895, max=75224, avg=9300.66, stdev=11546.98 00:11:53.364 lat (usec): min=920, max=75259, avg=9368.82, stdev=11609.78 00:11:53.364 clat percentiles (usec): 00:11:53.364 | 1.00th=[ 1516], 5.00th=[ 2900], 10.00th=[ 3589], 20.00th=[ 4490], 00:11:53.364 | 30.00th=[ 5342], 40.00th=[ 5669], 50.00th=[ 6325], 60.00th=[ 6718], 00:11:53.364 | 70.00th=[ 7308], 80.00th=[ 8586], 90.00th=[15139], 95.00th=[31065], 00:11:53.364 | 99.00th=[71828], 99.50th=[73925], 99.90th=[74974], 99.95th=[74974], 00:11:53.364 | 99.99th=[74974] 00:11:53.364 bw ( KiB/s): min=24824, max=28424, per=34.93%, avg=26624.00, stdev=2545.58, samples=2 00:11:53.364 iops : min= 6206, max= 7106, avg=6656.00, stdev=636.40, samples=2 00:11:53.364 lat (usec) : 1000=0.05% 00:11:53.364 lat (msec) : 2=1.13%, 4=7.51%, 10=72.93%, 20=9.18%, 50=7.53% 00:11:53.364 lat (msec) : 100=1.66% 00:11:53.364 cpu : usr=4.97%, sys=6.56%, ctx=496, majf=0, minf=1 00:11:53.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.1%, 32=0.2%, >=64=99.5% 00:11:53.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.365 issued rwts: total=6586,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.365 job1: (groupid=0, jobs=1): err= 0: pid=3223841: Mon Oct 7 09:32:52 2024 00:11:53.365 read: IOPS=4171, BW=16.3MiB/s (17.1MB/s)(17.1MiB/1048msec) 00:11:53.365 slat (nsec): min=973, max=15484k, avg=98571.30, stdev=759848.12 00:11:53.365 clat (usec): min=2751, max=59346, avg=13430.79, stdev=10069.40 00:11:53.365 lat (usec): min=2758, max=63018, avg=13529.36, stdev=10119.22 00:11:53.365 clat percentiles (usec): 00:11:53.365 | 1.00th=[ 3326], 5.00th=[ 5014], 10.00th=[ 5932], 20.00th=[ 6783], 00:11:53.365 | 30.00th=[ 8029], 40.00th=[ 8979], 50.00th=[11207], 60.00th=[12518], 00:11:53.365 | 70.00th=[13435], 80.00th=[17957], 90.00th=[22152], 95.00th=[32375], 00:11:53.365 | 99.00th=[58983], 99.50th=[58983], 99.90th=[59507], 99.95th=[59507], 00:11:53.365 | 99.99th=[59507] 00:11:53.365 write: IOPS=4396, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1048msec); 0 zone resets 00:11:53.365 slat (nsec): min=1550, max=10162k, avg=110053.51, stdev=577006.76 00:11:53.365 clat (usec): min=901, max=70195, avg=16099.85, stdev=14369.29 00:11:53.365 lat (usec): min=911, max=70204, avg=16209.91, stdev=14463.71 00:11:53.365 clat percentiles (usec): 00:11:53.365 | 1.00th=[ 1516], 5.00th=[ 2933], 10.00th=[ 4359], 20.00th=[ 5211], 00:11:53.365 | 30.00th=[ 6194], 40.00th=[ 7242], 50.00th=[ 8979], 60.00th=[15401], 00:11:53.365 | 70.00th=[21103], 80.00th=[27395], 90.00th=[34866], 95.00th=[46400], 00:11:53.365 | 99.00th=[62653], 99.50th=[67634], 99.90th=[69731], 99.95th=[69731], 00:11:53.365 | 99.99th=[69731] 00:11:53.365 bw ( KiB/s): min=13072, max=23792, per=24.18%, avg=18432.00, stdev=7580.18, samples=2 00:11:53.365 iops : min= 3268, max= 5948, avg=4608.00, stdev=1895.05, samples=2 00:11:53.365 lat (usec) : 1000=0.03% 00:11:53.365 lat (msec) : 2=1.09%, 4=4.32%, 10=43.07%, 20=28.92%, 50=18.88% 00:11:53.365 lat (msec) : 100=3.69% 00:11:53.365 cpu : usr=3.44%, sys=4.97%, ctx=406, majf=0, minf=1 00:11:53.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:53.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.365 issued rwts: total=4372,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.365 job2: (groupid=0, jobs=1): err= 0: pid=3223844: Mon Oct 7 09:32:52 2024 00:11:53.365 read: IOPS=3483, BW=13.6MiB/s (14.3MB/s)(14.2MiB/1047msec) 00:11:53.365 slat (nsec): min=934, max=11910k, avg=131073.26, stdev=860718.49 00:11:53.365 clat (usec): min=2805, max=94456, avg=16858.03, stdev=12826.18 00:11:53.365 lat (usec): min=2831, max=94464, avg=16989.10, stdev=12944.02 00:11:53.365 clat percentiles (usec): 00:11:53.365 | 1.00th=[ 3032], 5.00th=[ 6390], 10.00th=[ 6783], 20.00th=[ 7963], 00:11:53.365 | 30.00th=[ 9765], 40.00th=[11600], 50.00th=[13173], 60.00th=[14353], 00:11:53.365 | 70.00th=[16909], 80.00th=[22676], 90.00th=[35914], 95.00th=[41157], 00:11:53.365 | 99.00th=[74974], 99.50th=[85459], 99.90th=[94897], 99.95th=[94897], 00:11:53.365 | 99.99th=[94897] 00:11:53.365 write: IOPS=3912, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1047msec); 0 zone resets 00:11:53.365 slat (nsec): 
min=1613, max=13522k, avg=104407.42, stdev=670014.42 00:11:53.365 clat (usec): min=526, max=119820, avg=17444.71, stdev=20515.26 00:11:53.365 lat (usec): min=558, max=119827, avg=17549.11, stdev=20617.89 00:11:53.365 clat percentiles (usec): 00:11:53.365 | 1.00th=[ 1844], 5.00th=[ 3458], 10.00th=[ 4047], 20.00th=[ 5866], 00:11:53.365 | 30.00th=[ 7373], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[ 12256], 00:11:53.365 | 70.00th=[ 17433], 80.00th=[ 21627], 90.00th=[ 39584], 95.00th=[ 63177], 00:11:53.365 | 99.00th=[104334], 99.50th=[112722], 99.90th=[120062], 99.95th=[120062], 00:11:53.365 | 99.99th=[120062] 00:11:53.365 bw ( KiB/s): min=11776, max=20480, per=21.16%, avg=16128.00, stdev=6154.66, samples=2 00:11:53.365 iops : min= 2944, max= 5120, avg=4032.00, stdev=1538.66, samples=2 00:11:53.365 lat (usec) : 750=0.01% 00:11:53.365 lat (msec) : 2=0.67%, 4=4.87%, 10=36.51%, 20=34.79%, 50=18.39% 00:11:53.365 lat (msec) : 100=4.07%, 250=0.68% 00:11:53.365 cpu : usr=3.06%, sys=4.59%, ctx=347, majf=0, minf=2 00:11:53.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:53.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.365 issued rwts: total=3647,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.365 job3: (groupid=0, jobs=1): err= 0: pid=3223845: Mon Oct 7 09:32:52 2024 00:11:53.365 read: IOPS=4344, BW=17.0MiB/s (17.8MB/s)(17.1MiB/1007msec) 00:11:53.365 slat (nsec): min=1005, max=13537k, avg=109865.08, stdev=715188.59 00:11:53.365 clat (usec): min=4438, max=63508, avg=13006.86, stdev=8492.84 00:11:53.365 lat (usec): min=4440, max=63515, avg=13116.73, stdev=8569.46 00:11:53.365 clat percentiles (usec): 00:11:53.365 | 1.00th=[ 5735], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 7570], 00:11:53.365 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10683], 60.00th=[11863], 00:11:53.365 | 70.00th=[12649], 80.00th=[15139], 90.00th=[21627], 95.00th=[31851], 00:11:53.365 | 99.00th=[51643], 99.50th=[57410], 99.90th=[63701], 99.95th=[63701], 00:11:53.365 | 99.99th=[63701] 00:11:53.365 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:11:53.365 slat (nsec): min=1661, max=8417.0k, avg=106769.84, stdev=528807.40 00:11:53.365 clat (usec): min=2567, max=63498, avg=15309.52, stdev=9974.64 00:11:53.365 lat (usec): min=2571, max=63508, avg=15416.29, stdev=10032.98 00:11:53.365 clat percentiles (usec): 00:11:53.365 | 1.00th=[ 4047], 5.00th=[ 4621], 10.00th=[ 5473], 20.00th=[ 6980], 00:11:53.365 | 30.00th=[ 8225], 40.00th=[ 9765], 50.00th=[11076], 60.00th=[16319], 00:11:53.365 | 70.00th=[18744], 80.00th=[23200], 90.00th=[31065], 95.00th=[34866], 00:11:53.365 | 99.00th=[44303], 99.50th=[52691], 99.90th=[58983], 99.95th=[58983], 00:11:53.365 | 99.99th=[63701] 00:11:53.365 bw ( KiB/s): min=13072, max=23792, per=24.18%, avg=18432.00, stdev=7580.18, samples=2 00:11:53.365 iops : min= 3268, max= 5948, avg=4608.00, stdev=1895.05, samples=2 00:11:53.365 lat (msec) : 4=0.42%, 10=43.75%, 20=36.91%, 50=18.13%, 100=0.78% 00:11:53.365 cpu : usr=2.58%, sys=5.96%, ctx=428, majf=0, minf=1 00:11:53.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:53.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.365 issued rwts: total=4375,4608,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:11:53.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.365 00:11:53.365 Run status group 0 (all jobs): 00:11:53.365 READ: bw=70.7MiB/s (74.2MB/s), 13.6MiB/s-25.5MiB/s (14.3MB/s-26.8MB/s), io=74.1MiB (77.7MB), run=1007-1048msec 00:11:53.365 WRITE: bw=74.4MiB/s (78.0MB/s), 15.3MiB/s-25.8MiB/s (16.0MB/s-27.1MB/s), io=78.0MiB (81.8MB), run=1007-1048msec 00:11:53.365 00:11:53.365 Disk stats (read/write): 00:11:53.365 nvme0n1: ios=5359/5632, merge=0/0, ticks=29680/31616, in_queue=61296, util=79.16% 00:11:53.365 nvme0n2: ios=4371/4608, merge=0/0, ticks=50880/72322, in_queue=123202, util=84.44% 00:11:53.365 nvme0n3: ios=3646/4096, merge=0/0, ticks=36533/40767, in_queue=77300, util=89.37% 00:11:53.365 nvme0n4: ios=3559/3584, merge=0/0, ticks=43199/52184, in_queue=95383, util=97.60% 00:11:53.365 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:53.365 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3224181 00:11:53.365 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:53.365 09:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:53.365 [global] 00:11:53.365 thread=1 00:11:53.365 invalidate=1 00:11:53.365 rw=read 00:11:53.365 time_based=1 00:11:53.365 runtime=10 00:11:53.365 ioengine=libaio 00:11:53.365 direct=1 00:11:53.365 bs=4096 00:11:53.365 iodepth=1 00:11:53.365 norandommap=1 00:11:53.366 numjobs=1 00:11:53.366 00:11:53.366 [job0] 00:11:53.366 filename=/dev/nvme0n1 00:11:53.366 [job1] 00:11:53.366 filename=/dev/nvme0n2 00:11:53.366 [job2] 00:11:53.366 filename=/dev/nvme0n3 00:11:53.366 [job3] 00:11:53.366 filename=/dev/nvme0n4 00:11:53.366 Could not set queue depth (nvme0n1) 00:11:53.366 Could not set queue depth (nvme0n2) 00:11:53.366 Could not set queue depth (nvme0n3) 00:11:53.366 Could not set queue depth (nvme0n4) 00:11:53.636 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:53.636 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:53.636 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:53.636 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:53.636 fio-3.35 00:11:53.636 Starting 4 threads 00:11:56.185 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:56.446 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10305536, buflen=4096 00:11:56.446 fio: pid=3224373, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:56.446 09:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:56.707 09:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:56.707 09:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:56.707 fio: io_u error on file /dev/nvme0n3: Operation not supported: 
read offset=2392064, buflen=4096 00:11:56.707 fio: pid=3224372, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:56.707 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=14798848, buflen=4096 00:11:56.707 fio: pid=3224370, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:56.707 09:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:56.707 09:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:56.968 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12316672, buflen=4096 00:11:56.968 fio: pid=3224371, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:56.968 09:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:56.968 09:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:56.968 00:11:56.968 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3224370: Mon Oct 7 09:32:56 2024 00:11:56.968 read: IOPS=1229, BW=4916KiB/s (5034kB/s)(14.1MiB/2940msec) 00:11:56.968 slat (usec): min=6, max=11058, avg=30.91, stdev=250.92 00:11:56.968 clat (usec): min=198, max=1577, avg=773.80, stdev=170.10 00:11:56.968 lat (usec): min=205, max=12046, avg=804.71, stdev=306.42 00:11:56.968 clat percentiles (usec): 00:11:56.968 | 1.00th=[ 343], 5.00th=[ 490], 10.00th=[ 553], 20.00th=[ 619], 00:11:56.968 | 30.00th=[ 676], 40.00th=[ 725], 50.00th=[ 791], 60.00th=[ 848], 00:11:56.968 | 70.00th=[ 906], 80.00th=[ 938], 90.00th=[ 979], 95.00th=[ 1004], 00:11:56.968 | 99.00th=[ 1057], 99.50th=[ 1074], 99.90th=[ 1123], 99.95th=[ 1205], 00:11:56.968 | 99.99th=[ 1582] 00:11:56.968 bw ( KiB/s): min= 4632, max= 5272, per=40.30%, avg=5004.80, stdev=275.54, samples=5 00:11:56.968 iops : min= 1158, max= 1318, avg=1251.20, stdev=68.89, samples=5 00:11:56.968 lat (usec) : 250=0.22%, 500=5.37%, 750=38.30%, 1000=50.64% 00:11:56.968 lat (msec) : 2=5.45% 00:11:56.968 cpu : usr=1.63%, sys=3.37%, ctx=3618, majf=0, minf=1 00:11:56.968 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:56.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.968 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.968 issued rwts: total=3614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.968 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:56.968 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3224371: Mon Oct 7 09:32:56 2024 00:11:56.968 read: IOPS=960, BW=3842KiB/s (3934kB/s)(11.7MiB/3131msec) 00:11:56.968 slat (usec): min=6, max=14953, avg=46.65, stdev=469.85 00:11:56.968 clat (usec): min=205, max=5299, avg=980.53, stdev=135.56 00:11:56.968 lat (usec): min=212, max=15808, avg=1027.19, stdev=487.71 00:11:56.968 clat percentiles (usec): 00:11:56.968 | 1.00th=[ 717], 5.00th=[ 807], 10.00th=[ 865], 20.00th=[ 922], 00:11:56.968 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1004], 00:11:56.968 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1106], 00:11:56.968 
| 99.00th=[ 1156], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 4424], 00:11:56.968 | 99.99th=[ 5276] 00:11:56.968 bw ( KiB/s): min= 3651, max= 3976, per=31.24%, avg=3879.17, stdev=120.12, samples=6 00:11:56.969 iops : min= 912, max= 994, avg=969.67, stdev=30.32, samples=6 00:11:56.969 lat (usec) : 250=0.03%, 500=0.17%, 750=1.76%, 1000=53.99% 00:11:56.969 lat (msec) : 2=43.95%, 10=0.07% 00:11:56.969 cpu : usr=2.04%, sys=3.58%, ctx=3015, majf=0, minf=2 00:11:56.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:56.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.969 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.969 issued rwts: total=3008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.969 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:56.969 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3224372: Mon Oct 7 09:32:56 2024 00:11:56.969 read: IOPS=209, BW=838KiB/s (858kB/s)(2336KiB/2789msec) 00:11:56.969 slat (usec): min=7, max=15791, avg=53.71, stdev=651.79 00:11:56.969 clat (usec): min=602, max=43148, avg=4676.73, stdev=11684.82 00:11:56.969 lat (usec): min=636, max=58193, avg=4730.48, stdev=11790.04 00:11:56.969 clat percentiles (usec): 00:11:56.969 | 1.00th=[ 783], 5.00th=[ 881], 10.00th=[ 922], 20.00th=[ 971], 00:11:56.969 | 30.00th=[ 1020], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1123], 00:11:56.969 | 70.00th=[ 1156], 80.00th=[ 1205], 90.00th=[ 1319], 95.00th=[42730], 00:11:56.969 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:11:56.969 | 99.99th=[43254] 00:11:56.969 bw ( KiB/s): min= 88, max= 2552, per=7.44%, avg=924.80, stdev=1169.82, samples=5 00:11:56.969 iops : min= 22, max= 638, avg=231.20, stdev=292.45, samples=5 00:11:56.969 lat (usec) : 750=0.85%, 1000=24.79% 00:11:56.969 lat (msec) : 2=65.47%, 50=8.72% 00:11:56.969 cpu : usr=0.65%, sys=0.57%, ctx=586, majf=0, minf=2 00:11:56.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:56.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.969 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.969 issued rwts: total=585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.969 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:56.969 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3224373: Mon Oct 7 09:32:56 2024 00:11:56.969 read: IOPS=968, BW=3874KiB/s (3967kB/s)(9.83MiB/2598msec) 00:11:56.969 slat (nsec): min=6706, max=61399, avg=26526.29, stdev=3301.52 00:11:56.969 clat (usec): min=194, max=1303, avg=991.02, stdev=83.01 00:11:56.969 lat (usec): min=201, max=1329, avg=1017.55, stdev=83.22 00:11:56.969 clat percentiles (usec): 00:11:56.969 | 1.00th=[ 758], 5.00th=[ 848], 10.00th=[ 889], 20.00th=[ 938], 00:11:56.969 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1020], 00:11:56.969 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:11:56.969 | 99.00th=[ 1156], 99.50th=[ 1188], 99.90th=[ 1237], 99.95th=[ 1287], 00:11:56.969 | 99.99th=[ 1303] 00:11:56.969 bw ( KiB/s): min= 3856, max= 3968, per=31.53%, avg=3915.20, stdev=42.18, samples=5 00:11:56.969 iops : min= 964, max= 992, avg=978.80, stdev=10.55, samples=5 00:11:56.969 lat (usec) : 250=0.04%, 500=0.08%, 750=0.83%, 1000=48.11% 00:11:56.969 lat (msec) : 2=50.89% 00:11:56.969 cpu : usr=1.46%, 
sys=4.12%, ctx=2518, majf=0, minf=2 00:11:56.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:56.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.969 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.969 issued rwts: total=2517,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.969 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:56.969 00:11:56.969 Run status group 0 (all jobs): 00:11:56.969 READ: bw=12.1MiB/s (12.7MB/s), 838KiB/s-4916KiB/s (858kB/s-5034kB/s), io=38.0MiB (39.8MB), run=2598-3131msec 00:11:56.969 00:11:56.969 Disk stats (read/write): 00:11:56.969 nvme0n1: ios=3502/0, merge=0/0, ticks=2600/0, in_queue=2600, util=94.09% 00:11:56.969 nvme0n2: ios=2987/0, merge=0/0, ticks=2765/0, in_queue=2765, util=93.90% 00:11:56.969 nvme0n3: ios=580/0, merge=0/0, ticks=2518/0, in_queue=2518, util=96.03% 00:11:56.969 nvme0n4: ios=2517/0, merge=0/0, ticks=2327/0, in_queue=2327, util=96.31% 00:11:57.230 09:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:57.230 09:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:57.230 09:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:57.230 09:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:57.491 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:57.491 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:57.752 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:57.752 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:57.752 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:57.752 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3224181 00:11:57.752 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:57.752 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:58.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.013 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:58.013 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # local i=0 00:11:58.013 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -o NAME,SERIAL 00:11:58.013 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.013 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME,SERIAL 00:11:58.013 09:32:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1230 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.013 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1234 -- # return 0 00:11:58.014 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:58.014 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:58.014 nvmf hotplug test: fio failed as expected 00:11:58.014 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:58.275 rmmod nvme_tcp 00:11:58.275 rmmod nvme_fabrics 00:11:58.275 rmmod nvme_keyring 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 3220491 ']' 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 3220491 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' -z 3220491 ']' 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # kill -0 3220491 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # uname 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3220491 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # echo 'killing process 
with pid 3220491' 00:11:58.275 killing process with pid 3220491 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # kill 3220491 00:11:58.275 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@977 -- # wait 3220491 00:11:58.536 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:58.536 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:58.536 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:58.536 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:58.536 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:11:58.536 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:58.536 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:11:58.536 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:58.536 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:58.536 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.536 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.536 09:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.452 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:00.452 00:12:00.452 real 0m29.703s 00:12:00.452 user 2m33.612s 00:12:00.452 sys 0m10.109s 00:12:00.452 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # xtrace_disable 00:12:00.452 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.452 ************************************ 00:12:00.452 END TEST nvmf_fio_target 00:12:00.452 ************************************ 00:12:00.452 09:33:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:00.452 09:33:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:12:00.452 09:33:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1110 -- # xtrace_disable 00:12:00.452 09:33:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:00.452 ************************************ 00:12:00.452 START TEST nvmf_bdevio 00:12:00.452 ************************************ 00:12:00.452 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:00.714 * Looking for test storage... 
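The teardown that runs above (nvmftestfini) reduces to a short, reproducible shell sequence. A minimal sketch of the observable steps follows, using this run's names (nvmf pid 3220491, target namespace cvl_0_0_ns_spdk, initiator interface cvl_0_1); the real implementation lives in test/nvmf/common.sh, so treat this as an approximation reconstructed from the log, not a drop-in replacement:

#!/usr/bin/env bash
# Approximate nvmftestfini sequence, reconstructed from the log lines above.
sync
modprobe -v -r nvme-tcp                                # also drops nvme_fabrics / nvme_keyring, as the rmmod lines show
kill -0 3220491 && kill 3220491                        # killprocess: stop the nvmf_tgt reactor process
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: strip the SPDK_NVMF-tagged test rules
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # _remove_spdk_ns (assumed equivalent of the logged helper)
ip -4 addr flush cvl_0_1                               # final initiator address flush, as logged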
00:12:00.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1626 -- # lcov --version 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:12:00.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.714 --rc genhtml_branch_coverage=1 00:12:00.714 --rc genhtml_function_coverage=1 00:12:00.714 --rc genhtml_legend=1 00:12:00.714 --rc geninfo_all_blocks=1 00:12:00.714 --rc geninfo_unexecuted_blocks=1 00:12:00.714 00:12:00.714 ' 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:12:00.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.714 --rc genhtml_branch_coverage=1 00:12:00.714 --rc genhtml_function_coverage=1 00:12:00.714 --rc genhtml_legend=1 00:12:00.714 --rc geninfo_all_blocks=1 00:12:00.714 --rc geninfo_unexecuted_blocks=1 00:12:00.714 00:12:00.714 ' 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:12:00.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.714 --rc genhtml_branch_coverage=1 00:12:00.714 --rc genhtml_function_coverage=1 00:12:00.714 --rc genhtml_legend=1 00:12:00.714 --rc geninfo_all_blocks=1 00:12:00.714 --rc geninfo_unexecuted_blocks=1 00:12:00.714 00:12:00.714 ' 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:12:00.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.714 --rc genhtml_branch_coverage=1 00:12:00.714 --rc genhtml_function_coverage=1 00:12:00.714 --rc genhtml_legend=1 00:12:00.714 --rc geninfo_all_blocks=1 00:12:00.714 --rc geninfo_unexecuted_blocks=1 00:12:00.714 00:12:00.714 ' 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.714 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.715 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:00.715 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.715 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.715 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:00.715 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.715 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.715 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.715 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.715 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.976 09:33:00 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:00.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:00.976 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:00.977 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.977 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.977 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.977 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:00.977 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:00.977 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:00.977 09:33:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:09.121 09:33:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:09.121 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:09.121 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:09.121 09:33:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:09.121 Found net devices under 0000:31:00.0: cvl_0_0 00:12:09.121 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:09.122 Found net devices under 0000:31:00.1: cvl_0_1 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:09.122 09:33:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:09.122 09:33:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:09.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:12:09.122 00:12:09.122 --- 10.0.0.2 ping statistics --- 00:12:09.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.122 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:09.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:12:09.122 00:12:09.122 --- 10.0.0.1 ping statistics --- 00:12:09.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.122 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=3229890 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 3229890 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # '[' -z 3229890 ']' 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local max_retries=100 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@843 -- # xtrace_disable 00:12:09.122 09:33:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.122 [2024-10-07 09:33:08.187889] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:12:09.122 [2024-10-07 09:33:08.187956] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.122 [2024-10-07 09:33:08.278285] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.122 [2024-10-07 09:33:08.368150] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.122 [2024-10-07 09:33:08.368211] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.122 [2024-10-07 09:33:08.368221] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.122 [2024-10-07 09:33:08.368228] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.122 [2024-10-07 09:33:08.368234] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.122 [2024-10-07 09:33:08.370305] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:12:09.122 [2024-10-07 09:33:08.370465] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:12:09.122 [2024-10-07 09:33:08.370623] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.122 [2024-10-07 09:33:08.370645] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:12:09.383 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:12:09.383 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@867 -- # return 0 00:12:09.383 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:09.383 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@733 -- # xtrace_disable 00:12:09.383 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@564 -- # xtrace_disable 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.644 [2024-10-07 09:33:09.068519] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@564 -- # xtrace_disable 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.644 Malloc0 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@564 -- # xtrace_disable 00:12:09.644 09:33:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@564 -- # xtrace_disable 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@564 -- # xtrace_disable 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.644 [2024-10-07 09:33:09.133403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:12:09.644 { 00:12:09.644 "params": { 00:12:09.644 "name": "Nvme$subsystem", 00:12:09.644 "trtype": "$TEST_TRANSPORT", 00:12:09.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:09.644 "adrfam": "ipv4", 00:12:09.644 "trsvcid": "$NVMF_PORT", 00:12:09.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:09.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:09.644 "hdgst": ${hdgst:-false}, 00:12:09.644 "ddgst": ${ddgst:-false} 00:12:09.644 }, 00:12:09.644 "method": "bdev_nvme_attach_controller" 00:12:09.644 } 00:12:09.644 EOF 00:12:09.644 )") 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:12:09.644 09:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:12:09.644 "params": { 00:12:09.644 "name": "Nvme1", 00:12:09.644 "trtype": "tcp", 00:12:09.644 "traddr": "10.0.0.2", 00:12:09.644 "adrfam": "ipv4", 00:12:09.644 "trsvcid": "4420", 00:12:09.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.644 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:09.644 "hdgst": false, 00:12:09.644 "ddgst": false 00:12:09.644 }, 00:12:09.644 "method": "bdev_nvme_attach_controller" 00:12:09.644 }' 00:12:09.644 [2024-10-07 09:33:09.191135] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
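[Editor's sketch] Before the bdevio binary attaches, the target side was assembled above with four RPCs: nvmf_create_transport -t tcp -o -u 8192, bdev_malloc_create 64 512 -b Malloc0, nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, then nvmf_subsystem_add_ns / nvmf_subsystem_add_listener on 10.0.0.2:4420. The initiator configuration bdevio reads from /dev/fd/62 is produced by gen_nvmf_target_json, whose heredoc expansion is visible in the trace. A minimal standalone sketch of that pattern, assuming the same environment values the harness exports; the real helper in nvmf/common.sh adds more plumbing than this trace shows:

#!/usr/bin/env bash
# Emit one bdev_nvme_attach_controller entry, mirroring the heredoc in the
# trace above, and validate/pretty-print it with jq.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
    config+=("$(
cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
printf '%s\n' "${config[@]}" | jq .

With hdgst/ddgst unset this prints exactly the Nvme1 object shown in the trace, which bdevio then consumes via --json /dev/fd/62 to attach Nvme1n1 over TCP before the CUnit suite starts.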
00:12:09.644 [2024-10-07 09:33:09.191200] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3230131 ] 00:12:09.644 [2024-10-07 09:33:09.275073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:09.905 [2024-10-07 09:33:09.374271] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.905 [2024-10-07 09:33:09.374440] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.905 [2024-10-07 09:33:09.374441] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.166 I/O targets: 00:12:10.166 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:10.166 00:12:10.166 00:12:10.166 CUnit - A unit testing framework for C - Version 2.1-3 00:12:10.166 http://cunit.sourceforge.net/ 00:12:10.166 00:12:10.166 00:12:10.166 Suite: bdevio tests on: Nvme1n1 00:12:10.166 Test: blockdev write read block ...passed 00:12:10.166 Test: blockdev write zeroes read block ...passed 00:12:10.166 Test: blockdev write zeroes read no split ...passed 00:12:10.166 Test: blockdev write zeroes read split ...passed 00:12:10.166 Test: blockdev write zeroes read split partial ...passed 00:12:10.166 Test: blockdev reset ...[2024-10-07 09:33:09.759613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:10.166 [2024-10-07 09:33:09.759726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11941c0 (9): Bad file descriptor 00:12:10.166 [2024-10-07 09:33:09.773756] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:10.166 passed 00:12:10.166 Test: blockdev write read 8 blocks ...passed 00:12:10.166 Test: blockdev write read size > 128k ...passed 00:12:10.166 Test: blockdev write read invalid size ...passed 00:12:10.166 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:10.166 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:10.166 Test: blockdev write read max offset ...passed 00:12:10.427 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:10.427 Test: blockdev writev readv 8 blocks ...passed 00:12:10.427 Test: blockdev writev readv 30 x 1block ...passed 00:12:10.427 Test: blockdev writev readv block ...passed 00:12:10.427 Test: blockdev writev readv size > 128k ...passed 00:12:10.427 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:10.427 Test: blockdev comparev and writev ...[2024-10-07 09:33:09.999548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:10.427 [2024-10-07 09:33:09.999597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:10.427 [2024-10-07 09:33:09.999621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:10.427 [2024-10-07 09:33:09.999631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:10.427 [2024-10-07 09:33:10.000211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:10.427 [2024-10-07 09:33:10.000227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:10.427 [2024-10-07 09:33:10.000241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:10.427 [2024-10-07 09:33:10.000251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:10.427 [2024-10-07 09:33:10.000840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:10.427 [2024-10-07 09:33:10.000854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:10.427 [2024-10-07 09:33:10.000868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:10.427 [2024-10-07 09:33:10.000877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:10.427 [2024-10-07 09:33:10.001429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:10.427 [2024-10-07 09:33:10.001444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:10.427 [2024-10-07 09:33:10.001467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:10.428 [2024-10-07 09:33:10.001480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:10.428 passed 00:12:10.428 Test: blockdev nvme passthru rw ...passed 00:12:10.428 Test: blockdev nvme passthru vendor specific ...[2024-10-07 09:33:10.086259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:10.428 [2024-10-07 09:33:10.086303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:10.428 [2024-10-07 09:33:10.086682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:10.428 [2024-10-07 09:33:10.086699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:10.428 [2024-10-07 09:33:10.086912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:10.428 [2024-10-07 09:33:10.086927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:10.428 [2024-10-07 09:33:10.087166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:10.428 [2024-10-07 09:33:10.087179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:10.428 passed 00:12:10.689 Test: blockdev nvme admin passthru ...passed 00:12:10.689 Test: blockdev copy ...passed 00:12:10.689 00:12:10.689 Run Summary: Type Total Ran Passed Failed Inactive 00:12:10.689 suites 1 1 n/a 0 0 00:12:10.689 tests 23 23 23 0 0 00:12:10.689 asserts 152 152 152 0 n/a 00:12:10.689 00:12:10.689 Elapsed time = 1.119 seconds 00:12:10.689 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.689 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@564 -- # xtrace_disable 00:12:10.689 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:10.689 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:12:10.689 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:10.689 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:10.689 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:10.689 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:10.689 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:10.689 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:10.689 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:10.689 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:10.689 rmmod nvme_tcp 00:12:10.689 rmmod nvme_fabrics 00:12:10.689 rmmod nvme_keyring 00:12:10.689 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:10.690 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:10.690 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
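[Editor's sketch] The kernel-module unload above is deliberately tolerant: errexit is suspended with set +e, removal of nvme-tcp is attempted inside a bounded retry loop, and errexit is restored before returning. A condensed sketch of that pattern; the break-on-success and the delay between attempts are assumptions, since the trace only shows the first, successful iteration:

set +e
for i in {1..20}; do
    # rmmod fails while the kernel still holds a reference to nvme_tcp;
    # retry a bounded number of times instead of aborting the whole run
    modprobe -v -r nvme-tcp && break
    sleep 1   # assumed back-off; no delay is visible in this trace
done
modprobe -v -r nvme-fabrics
set -e

The interleaved "rmmod nvme_tcp / rmmod nvme_fabrics / rmmod nvme_keyring" lines above are modprobe's -v output as it removes the module together with its now-unused dependencies.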
00:12:10.690 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 3229890 ']' 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 3229890 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' -z 3229890 ']' 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # kill -0 3229890 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # uname 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3229890 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # process_name=reactor_3 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@963 -- # '[' reactor_3 = sudo ']' 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3229890' 00:12:10.951 killing process with pid 3229890 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # kill 3229890 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@977 -- # wait 3229890 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.951 09:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.501 09:33:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:13.501 00:12:13.501 real 0m12.552s 00:12:13.501 user 0m13.112s 00:12:13.501 sys 0m6.517s 00:12:13.501 09:33:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # xtrace_disable 00:12:13.501 09:33:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:13.501 ************************************ 00:12:13.501 END TEST nvmf_bdevio 00:12:13.501 ************************************ 00:12:13.501 09:33:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:13.501 00:12:13.501 real 5m9.164s 00:12:13.501 user 11m49.388s 00:12:13.501 sys 1m55.697s 
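[Editor's sketch] Teardown for the bdevio case ran in a fixed order above: killprocess confirmed that pid 3229890 was still alive and was one of our reactors (reactor_3, not a sudo wrapper) before killing and waiting on it; iptr round-tripped the firewall so only rules tagged by the harness are dropped; remove_spdk_ns deleted the target network namespace before the interface address was flushed. A condensed sketch of the two reusable pieces (the real killprocess in common/autotest_common.sh has more branches than shown here):

# drop only firewall rules carrying the SPDK_NVMF comment tag that the
# harness attaches when it installs them (see the ipts call later in this log)
iptables-save | grep -v SPDK_NVMF | iptables-restore

# kill a test process only if it is still alive and not a sudo wrapper;
# wait works here because the app is a child of the harness shell
pid=3229890   # pid from the log above
if kill -0 "$pid" 2>/dev/null; then
    name=$(ps --no-headers -o comm= "$pid")
    if [ "$name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    fi
fi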
00:12:13.501 09:33:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # xtrace_disable 00:12:13.501 09:33:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:13.501 ************************************ 00:12:13.501 END TEST nvmf_target_core 00:12:13.501 ************************************ 00:12:13.501 09:33:12 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:13.501 09:33:12 nvmf_tcp -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:12:13.501 09:33:12 nvmf_tcp -- common/autotest_common.sh@1110 -- # xtrace_disable 00:12:13.501 09:33:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:13.501 ************************************ 00:12:13.501 START TEST nvmf_target_extra 00:12:13.501 ************************************ 00:12:13.501 09:33:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:13.501 * Looking for test storage... 00:12:13.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:13.501 09:33:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:12:13.501 09:33:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1626 -- # lcov --version 00:12:13.501 09:33:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.501 09:33:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:12:13.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.501 --rc genhtml_branch_coverage=1 00:12:13.501 --rc genhtml_function_coverage=1 00:12:13.501 --rc genhtml_legend=1 00:12:13.501 --rc geninfo_all_blocks=1 00:12:13.501 --rc geninfo_unexecuted_blocks=1 00:12:13.501 00:12:13.501 ' 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:12:13.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.502 --rc genhtml_branch_coverage=1 00:12:13.502 --rc genhtml_function_coverage=1 00:12:13.502 --rc genhtml_legend=1 00:12:13.502 --rc geninfo_all_blocks=1 00:12:13.502 --rc geninfo_unexecuted_blocks=1 00:12:13.502 00:12:13.502 ' 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:12:13.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.502 --rc genhtml_branch_coverage=1 00:12:13.502 --rc genhtml_function_coverage=1 00:12:13.502 --rc genhtml_legend=1 00:12:13.502 --rc geninfo_all_blocks=1 00:12:13.502 --rc geninfo_unexecuted_blocks=1 00:12:13.502 00:12:13.502 ' 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:12:13.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.502 --rc genhtml_branch_coverage=1 00:12:13.502 --rc genhtml_function_coverage=1 00:12:13.502 --rc genhtml_legend=1 00:12:13.502 --rc geninfo_all_blocks=1 00:12:13.502 --rc geninfo_unexecuted_blocks=1 00:12:13.502 00:12:13.502 ' 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:13.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:13.502 ************************************ 00:12:13.502 START TEST nvmf_example 00:12:13.502 ************************************ 
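[Editor's sketch] Each nested test opens with the same prologue seen above: an lcov capability probe gated by lt 1.15 2 (installed lcov 1.15 vs. 2), i.e. scripts/common.sh's cmp_versions, whose field-by-field walk produces the long (( v < (ver1_l > ver2_l ? ...) )) trace lines. A sketch reconstructed from that trace; the real decimal helper's handling of non-numeric fields is simplified away here:

# split versions on '.', '-' and ':' and compare numerically field by field
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v d1 d2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        ((d1 == d2)) && continue
        case "$op" in
            '<') ((d1 < d2)); return ;;
            '>') ((d1 > d2)); return ;;
        esac
    done
    # all fields equal: strict comparisons are false
    [[ $op == '<' || $op == '>' ]] && return 1
    return 0
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov predates 2.x, keep legacy branch-coverage flags"

The recurring "test/nvmf/common.sh: line 33: [: : integer expression expected" message above (it repeats below for nvmf_example) is a benign side effect of the same prologue: an unexported test flag is compared numerically, literally [ '' -eq 1 ]. The flag's actual name is not visible in this trace; written defensively as [ "${FLAG:-0}" -eq 1 ] (FLAG being a placeholder), the comparison would default to 0 and the warning would disappear.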
00:12:13.502 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:13.772 * Looking for test storage... 00:12:13.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.772 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:12:13.772 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1626 -- # lcov --version 00:12:13.772 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:12:13.772 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:12:13.772 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.772 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.772 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.772 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.772 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:12:13.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.773 --rc genhtml_branch_coverage=1 00:12:13.773 --rc genhtml_function_coverage=1 00:12:13.773 --rc genhtml_legend=1 00:12:13.773 --rc geninfo_all_blocks=1 00:12:13.773 --rc geninfo_unexecuted_blocks=1 00:12:13.773 00:12:13.773 ' 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:12:13.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.773 --rc genhtml_branch_coverage=1 00:12:13.773 --rc genhtml_function_coverage=1 00:12:13.773 --rc genhtml_legend=1 00:12:13.773 --rc geninfo_all_blocks=1 00:12:13.773 --rc geninfo_unexecuted_blocks=1 00:12:13.773 00:12:13.773 ' 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:12:13.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.773 --rc genhtml_branch_coverage=1 00:12:13.773 --rc genhtml_function_coverage=1 00:12:13.773 --rc genhtml_legend=1 00:12:13.773 --rc geninfo_all_blocks=1 00:12:13.773 --rc geninfo_unexecuted_blocks=1 00:12:13.773 00:12:13.773 ' 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:12:13.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.773 --rc genhtml_branch_coverage=1 00:12:13.773 --rc genhtml_function_coverage=1 00:12:13.773 --rc genhtml_legend=1 00:12:13.773 --rc geninfo_all_blocks=1 00:12:13.773 --rc geninfo_unexecuted_blocks=1 00:12:13.773 00:12:13.773 ' 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:13.773 09:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.773 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.774 09:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:13.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:13.774 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:22.076 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:22.076 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:22.076 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:22.077 Found net devices under 0000:31:00.0: cvl_0_0 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:22.077 Found net devices under 0000:31:00.1: cvl_0_1 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:22.077 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set cvl_0_0 up 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:22.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:12:22.077 00:12:22.077 --- 10.0.0.2 ping statistics --- 00:12:22.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.077 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:22.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:12:22.077 00:12:22.077 --- 10.0.0.1 ping statistics --- 00:12:22.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.077 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3235196 00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:22.077 09:33:21 
00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3235196
00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # '[' -z 3235196 ']'
00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local max_retries=100
00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:22.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@843 -- # xtrace_disable
00:12:22.077 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@863 -- # (( i == 0 ))
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@867 -- # return 0
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@733 -- # xtrace_disable
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@564 -- # xtrace_disable
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@564 -- # xtrace_disable
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@564 -- # xtrace_disable
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
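
At this point the target has been provisioned over its RPC socket: a TCP transport with 8192-byte in-capsule data, a 64 MiB / 512-byte-block Malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1; the namespace attach and listener follow in the trace below. rpc_cmd is the harness wrapper around SPDK's rpc.py, so a hedged sketch of the same sequence issued directly would look like this (script path assumed from this checkout's layout; flags copied from the trace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk.sock   # the UNIX socket waitforlisten polled above

    "$RPC" -s "$SOCK" nvmf_create_transport -t tcp -o -u 8192   # transport opts as passed by the harness
    "$RPC" -s "$SOCK" bdev_malloc_create 64 512                 # 64 MiB, 512 B blocks -> "Malloc0"
    "$RPC" -s "$SOCK" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" -s "$SOCK" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" -s "$SOCK" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Because the RPC endpoint is a UNIX-domain socket, it stays reachable from the root namespace even though the target's TCP listener lives inside cvl_0_0_ns_spdk.
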
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@564 -- # xtrace_disable
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@564 -- # xtrace_disable
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:12:22.650 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:12:34.884 Initializing NVMe Controllers
00:12:34.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:34.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:34.884 Initialization complete. Launching workers.
00:12:34.884 ========================================================
00:12:34.884                                                                                Latency(us)
00:12:34.884 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:12:34.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   19525.28      76.27    3277.49     641.49   15454.54
00:12:34.884 ========================================================
00:12:34.884 Total                                                                  :   19525.28      76.27    3277.49     641.49   15454.54
00:12:34.884
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:34.884 rmmod nvme_tcp
00:12:34.884 rmmod nvme_fabrics
00:12:34.884 rmmod nvme_keyring
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 3235196 ']'
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 3235196
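
The result table above is the 30/70 random read/write run: 19525.28 IOPS at a 3277.49 us mean latency. Those two numbers are self-consistent under Little's law: with 64 I/Os kept outstanding, 64 / 3277.49e-6 s is roughly 19527 IOPS, matching the report. The invocation that produced it is shown in the trace; for reference (binary path from this run; -q is the queue depth, -o the I/O size in bytes, -w the access pattern, -M the read percentage of the mix, -t the run time in seconds, -r the target's transport ID):

    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    "$PERF" -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
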
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' -z 3235196 ']'
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # kill -0 3235196
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # uname
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3235196
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # process_name=nvmf
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@963 -- # '[' nvmf = sudo ']'
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3235196'
00:12:34.884 killing process with pid 3235196
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # kill 3235196
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@977 -- # wait 3235196
00:12:34.884 nvmf threads initialize successfully
00:12:34.884 bdev subsystem init successfully
00:12:34.884 created a nvmf target service
00:12:34.884 create targets's poll groups done
00:12:34.884 all subsystems of target started
00:12:34.884 nvmf target is running
00:12:34.884 all subsystems of target stopped
00:12:34.884 destroy targets's poll groups done
00:12:34.884 destroyed the nvmf target service
00:12:34.884 bdev subsystem finish successfully
00:12:34.884 nvmf threads destroy successfully
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:34.884 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:35.456 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:35.456 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:12:35.456 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@733 -- # xtrace_disable
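
The teardown above relies on the tag-based firewall bookkeeping set up earlier: ipts appended an '-m comment --comment SPDK_NVMF:...' to the ACCEPT rule, so iptr can remove every test-added rule in a single pass by filtering the saved ruleset. A minimal sketch of that idea (hypothetical helper names, not the harness's ipts/iptr definitions):

    tag_rule() {    # add a rule carrying the SPDK_NVMF tag
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    untag_all() {   # drop every tagged rule in one shot
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    tag_rule -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    untag_all

The benefit is that cleanup needs no record of which rules were added or in what order; anything without the tag, including pre-existing rules, survives the restore untouched.
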
00:12:35.456 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:35.456
00:12:35.456 real 0m21.774s
00:12:35.456 user 0m47.055s
00:12:35.456 sys 0m7.192s
00:12:35.456 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # xtrace_disable
00:12:35.456 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:35.456 ************************************
00:12:35.456 END TEST nvmf_example
00:12:35.456 ************************************
00:12:35.456 09:33:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:12:35.456 09:33:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']'
00:12:35.456 09:33:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable
00:12:35.456 09:33:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:35.456 ************************************
00:12:35.456 START TEST nvmf_filesystem
00:12:35.456 ************************************
00:12:35.456 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:12:35.456 * Looking for test storage...
00:12:35.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:35.456 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1625 -- # [[ y == y ]]
00:12:35.456 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1626 -- # lcov --version
00:12:35.456 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1626 -- # awk '{print $NF}'
00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1626 -- # lt 1.15 2
00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem --
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:12:35.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.721 --rc genhtml_branch_coverage=1 00:12:35.721 --rc genhtml_function_coverage=1 00:12:35.721 --rc genhtml_legend=1 00:12:35.721 --rc geninfo_all_blocks=1 00:12:35.721 --rc geninfo_unexecuted_blocks=1 00:12:35.721 00:12:35.721 ' 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:12:35.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.721 --rc genhtml_branch_coverage=1 00:12:35.721 --rc genhtml_function_coverage=1 00:12:35.721 --rc genhtml_legend=1 00:12:35.721 --rc geninfo_all_blocks=1 00:12:35.721 --rc geninfo_unexecuted_blocks=1 00:12:35.721 00:12:35.721 ' 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:12:35.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.721 --rc genhtml_branch_coverage=1 00:12:35.721 --rc genhtml_function_coverage=1 00:12:35.721 --rc genhtml_legend=1 00:12:35.721 --rc geninfo_all_blocks=1 00:12:35.721 --rc geninfo_unexecuted_blocks=1 00:12:35.721 00:12:35.721 ' 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:12:35.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.721 --rc genhtml_branch_coverage=1 00:12:35.721 --rc genhtml_function_coverage=1 00:12:35.721 --rc genhtml_legend=1 00:12:35.721 --rc geninfo_all_blocks=1 00:12:35.721 --rc geninfo_unexecuted_blocks=1 00:12:35.721 00:12:35.721 ' 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 
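
The cmp_versions walk just traced splits each version string on '.', '-' and ':' (the IFS=.-: assignment), pads the shorter one with zeros, and compares field by field as decimals; here 1 < 2 decides it on the first field, so lt 1.15 2 succeeds and the older-lcov option set is selected. A condensed, hypothetical re-implementation of that comparison (sketch only; the real helper also screens non-numeric fields through its decimal check):

    version_lt() {      # usage: version_lt 1.15 2  -> exit status 0 (true)
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # pad the shorter version with zeros
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1        # equal -> not strictly less-than
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"
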
00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:35.721 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 
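
The CONFIG_* assignments being replayed here (the flag dump resumes directly below) come from build_config.sh, the shell-side mirror of the generated include/spdk/config.h that the trace later pattern-checks for SPDK_CONFIG_DEBUG. The correspondence is mechanical: 'y' becomes a define of 1, 'n' becomes an #undef, and any other value is carried through verbatim. A hypothetical converter illustrating the mapping (illustration only, not SPDK's actual generator):

    emit_config_h() {   # read CONFIG_X=value lines, print config.h-style lines
        local key val
        while IFS='=' read -r key val; do
            [[ $key == CONFIG_* ]] || continue
            case $val in
                y) echo "#define SPDK_${key} 1" ;;
                n) echo "#undef SPDK_${key}" ;;
                *) echo "#define SPDK_${key} ${val}" ;;
            esac
        done
    }

    printf 'CONFIG_UBSAN=y\nCONFIG_ASAN=n\nCONFIG_MAX_LCORES=128\n' | emit_config_h
    # -> #define SPDK_CONFIG_UBSAN 1 / #undef SPDK_CONFIG_ASAN / #define SPDK_CONFIG_MAX_LCORES 128
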
00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@50 -- # CONFIG_RDMA=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:12:35.722 09:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:12:35.722 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:35.723 
09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:35.723 #define SPDK_CONFIG_H 00:12:35.723 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:35.723 #define SPDK_CONFIG_APPS 1 00:12:35.723 #define SPDK_CONFIG_ARCH native 00:12:35.723 #undef SPDK_CONFIG_ASAN 00:12:35.723 #undef SPDK_CONFIG_AVAHI 00:12:35.723 #undef SPDK_CONFIG_CET 00:12:35.723 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:35.723 #define SPDK_CONFIG_COVERAGE 1 00:12:35.723 #define SPDK_CONFIG_CROSS_PREFIX 00:12:35.723 #undef SPDK_CONFIG_CRYPTO 00:12:35.723 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:35.723 #undef SPDK_CONFIG_CUSTOMOCF 00:12:35.723 #undef SPDK_CONFIG_DAOS 00:12:35.723 #define SPDK_CONFIG_DAOS_DIR 00:12:35.723 #define SPDK_CONFIG_DEBUG 1 00:12:35.723 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:35.723 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:35.723 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:35.723 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:35.723 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:35.723 #undef SPDK_CONFIG_DPDK_UADK 00:12:35.723 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:35.723 #define SPDK_CONFIG_EXAMPLES 1 00:12:35.723 #undef SPDK_CONFIG_FC 00:12:35.723 #define SPDK_CONFIG_FC_PATH 00:12:35.723 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:35.723 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:35.723 #define SPDK_CONFIG_FSDEV 1 00:12:35.723 #undef SPDK_CONFIG_FUSE 00:12:35.723 #undef SPDK_CONFIG_FUZZER 00:12:35.723 #define SPDK_CONFIG_FUZZER_LIB 00:12:35.723 #undef SPDK_CONFIG_GOLANG 00:12:35.723 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:35.723 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:35.723 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:35.723 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:35.723 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:35.723 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:35.723 #undef SPDK_CONFIG_HAVE_LZ4 00:12:35.723 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:35.723 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:35.723 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:35.723 #define SPDK_CONFIG_IDXD 1 00:12:35.723 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:35.723 #undef SPDK_CONFIG_IPSEC_MB 00:12:35.723 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:35.723 #define SPDK_CONFIG_ISAL 1 00:12:35.723 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:35.723 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:35.723 #define SPDK_CONFIG_LIBDIR 00:12:35.723 #undef SPDK_CONFIG_LTO 00:12:35.723 #define SPDK_CONFIG_MAX_LCORES 128 00:12:35.723 #define SPDK_CONFIG_NVME_CUSE 1 00:12:35.723 #undef SPDK_CONFIG_OCF 00:12:35.723 #define SPDK_CONFIG_OCF_PATH 00:12:35.723 #define SPDK_CONFIG_OPENSSL_PATH 00:12:35.723 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:35.723 #define SPDK_CONFIG_PGO_DIR 00:12:35.723 #undef SPDK_CONFIG_PGO_USE 00:12:35.723 #define SPDK_CONFIG_PREFIX /usr/local 00:12:35.723 #undef SPDK_CONFIG_RAID5F 00:12:35.723 #undef SPDK_CONFIG_RBD 00:12:35.723 #define SPDK_CONFIG_RDMA 1 00:12:35.723 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:35.723 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:35.723 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:35.723 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:35.723 #define SPDK_CONFIG_SHARED 1 00:12:35.723 #undef SPDK_CONFIG_SMA 00:12:35.723 #define SPDK_CONFIG_TESTS 1 00:12:35.723 #undef SPDK_CONFIG_TSAN 00:12:35.723 #define SPDK_CONFIG_UBLK 1 00:12:35.723 #define SPDK_CONFIG_UBSAN 1 00:12:35.723 #undef SPDK_CONFIG_UNIT_TESTS 00:12:35.723 #undef 
SPDK_CONFIG_URING 00:12:35.723 #define SPDK_CONFIG_URING_PATH 00:12:35.723 #undef SPDK_CONFIG_URING_ZNS 00:12:35.723 #undef SPDK_CONFIG_USDT 00:12:35.723 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:35.723 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:35.723 #define SPDK_CONFIG_VFIO_USER 1 00:12:35.723 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:35.723 #define SPDK_CONFIG_VHOST 1 00:12:35.723 #define SPDK_CONFIG_VIRTIO 1 00:12:35.723 #undef SPDK_CONFIG_VTUNE 00:12:35.723 #define SPDK_CONFIG_VTUNE_DIR 00:12:35.723 #define SPDK_CONFIG_WERROR 1 00:12:35.723 #define SPDK_CONFIG_WPDK_DIR 00:12:35.723 #undef SPDK_CONFIG_XNVME 00:12:35.723 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.723 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:35.724 
09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/nvme/functions.sh 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/nvme/functions.sh 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvme/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/nvme/../../../ 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvme/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvme/functions.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/sync/functions.sh 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- sync/functions.sh@7 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/sync/functions.sh 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- sync/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/sync/../../../ 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- sync/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvme/functions.sh@11 -- # ctrls_g=() 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvme/functions.sh@11 -- # declare -A ctrls_g 00:12:35.724 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvme/functions.sh@12 -- # nvmes_g=() 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvme/functions.sh@12 -- # declare -A nvmes_g 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvme/functions.sh@13 -- # bdfs_g=() 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvme/functions.sh@13 -- # declare -A bdfs_g 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvme/functions.sh@14 -- # ordered_ctrls_g=() 00:12:35.725 09:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvme/functions.sh@14 -- # declare -a ordered_ctrls_g 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvme/functions.sh@16 -- # nvme_name= 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/sync/functions.sh 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- sync/functions.sh@7 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/sync/functions.sh 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- sync/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/sync/../../../ 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- sync/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # export RUN_NIGHTLY 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_RUN_VALGRIND 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 1 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_UNITTEST 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_AUTOBUILD 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_RELEASE_BUILD 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISAL 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_ISCSI 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@85 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_PMR 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_BP 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 1 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_CLI 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVME_CUSE 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_NVME_FDP 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 1 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_NVMF 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 1 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_VFIOUSER 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_FUZZER 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_FUZZER_SHORT 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # : tcp 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_RBD 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_VHOST 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOCKDEV 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_RAID 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 
-- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_IOAT 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_BLOBFS 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_TEST_VHOST_INIT 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_TEST_LVOL 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_ASAN 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 1 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_RUN_UBSAN 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_RUN_NON_ROOT 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_CRYPTO 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_FTL 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_OCF 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_TEST_VMD 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_OPAL 00:12:35.725 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_NATIVE_DPDK 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # : true 00:12:35.726 09:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_AUTOTEST_X 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_URING 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_USDT 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_USE_IGB_UIO 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # : 0 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_SCHEDULER 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SCANBUILD 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # : e810 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_NVMF_NICS 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_SMA 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_DAOS 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_XNVME 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # : 0 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # export SPDK_TEST_ACCEL 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 0 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_ACCEL_DSA 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_ACCEL_IAA 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # : 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # export SPDK_TEST_FUZZER_TARGET 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # : 0 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_TEST_NVMF_MDNS 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # : 0 00:12:35.726 09:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # : 0 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_TEST_SETUP 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@188 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@192 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@192 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:35.726 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # PYTHONDONTWRITEBYTECODE=1 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
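
The paired `: 0` / `export SPDK_TEST_*` records earlier in this trace are the xtrace footprint of bash's default-then-export idiom. A minimal sketch of that pattern, using a placeholder flag name rather than one of the real SPDK_TEST_* variables:

    #!/usr/bin/env bash
    set -x
    # Give the flag a default only when the caller has not already set
    # it, then export it so spawned test processes inherit the value.
    : "${MY_TEST_FLAG:=0}"
    export MY_TEST_FLAG

Under `set -x` the parameter expansion is resolved before the command is echoed, which is why the trace shows the bare `: 0` (or `: 1`, `: tcp`, and so on) followed by the matching `export`. The repeated path triples inside LD_LIBRARY_PATH and PYTHONPATH above are consistent with autotest_common.sh being re-sourced once per nesting level of the run (nvmf_tcp, nvmf_target_extra, nvmf_filesystem), each pass prepending the same directories again; the sketch does not reproduce that accumulation.
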
00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # rm -rf /var/tmp/asan_suppression_file 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@207 -- # cat 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@243 -- # echo leak:libfuse3.so 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # '[' -z /var/spdk/dependencies ']' 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export DEPENDENCY_DIR 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
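
The suppression-file records above (autotest_common.sh@205 through @245) build LeakSanitizer's ignore list. A condensed, runnable sketch of the same setup; the real script also cats an optional extra suppression source at @207, which is omitted here:

    #!/usr/bin/env bash
    sup=/var/tmp/asan_suppression_file
    rm -rf "$sup"
    # One rule per line; this entry hides known leaks in libfuse3,
    # exactly as echoed in the trace.
    echo 'leak:libfuse3.so' >> "$sup"
    # LeakSanitizer consults the file when an instrumented process exits.
    export LSAN_OPTIONS="suppressions=$sup"
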
00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_MAIN=0 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV_LLVM=1 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _LCOV= 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ '' == *clang* ]] 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # _lcov_opt[_LCOV_MAIN]= 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # lcov_opt= 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # '[' 0 -eq 0 ']' 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # export valgrind= 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # valgrind= 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # uname -s 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # '[' Linux = Linux ']' 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # HUGEMEM=4096 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # export CLEAR_HUGE=yes 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # CLEAR_HUGE=yes 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKE=make 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@291 -- # MAKEFLAGS=-j144 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # export HUGEMEM=4096 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # HUGEMEM=4096 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # NO_HUGE=() 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # TEST_MODE= 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # for i in "$@" 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@312 -- # case "$i" in 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@317 -- # TEST_TRANSPORT=tcp 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # [[ -z 3238004 ]] 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # kill -0 3238004 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1611 -- # set_test_storage 2147483648 
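
`set_test_storage 2147483648`, whose trace follows, picks the first directory from `storage_candidates` whose filesystem still has the requested 2 GiB free. A condensed sketch of that walk, assuming byte-denominated `df -B1` output; `testdir` and `storage_fallback` here are stand-ins for the real variables:

    #!/usr/bin/env bash
    requested_size=2147483648            # 2 GiB, as requested in the log
    testdir=${testdir:-$PWD}             # stand-in for the suite's test dir
    storage_fallback=${TMPDIR:-/tmp}     # stand-in for the mktemp'd fallback
    declare -A avails
    # Record free bytes of every mounted filesystem, keyed by mount point.
    while read -r _src _fs _size _used avail _use mount; do
        avails["$mount"]=$avail
    done < <(df -T -B1 | grep -v Filesystem)
    for target_dir in "$testdir" "$storage_fallback"; do
        # Same awk the trace uses to map a directory to its mount point.
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        if [ "${avails[$mount]:-0}" -ge "$requested_size" ]; then
            echo "using $target_dir on $mount"
            break
        fi
    done

In the real helper the chosen `target_space` is additionally checked against a ceiling (the `new_size=8276324352` and `new_size * 100 / sizes[/] > 95` records below), so a nearly full root filesystem is rejected even when the request technically fits.
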
00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # [[ -v testdir ]] 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local requested_size=2147483648 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local mount target_dir 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local -A mounts fss sizes avails uses 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@348 -- # local source fs size avail mount use 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # local storage_fallback storage_candidates 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # mktemp -udt spdk.XXXXXX 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # storage_fallback=/tmp/spdk.t7srQq 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@359 -- # [[ -n '' ]] 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # [[ -n '' ]] 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.t7srQq/tests/target /tmp/spdk.t7srQq 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # requested_size=2214592512 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # read -r source fs size use avail _ mount 00:12:35.727 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # df -T 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # grep -v Filesystem 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # mounts["$mount"]=spdk_devtmpfs 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # fss["$mount"]=devtmpfs 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # avails["$mount"]=67108864 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # sizes["$mount"]=67108864 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # uses["$mount"]=0 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # read -r source fs size use avail _ mount 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # mounts["$mount"]=/dev/pmem0 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # fss["$mount"]=ext2 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # avails["$mount"]=682680320 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # sizes["$mount"]=5284429824 00:12:35.992 09:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # uses["$mount"]=4601749504 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # read -r source fs size use avail _ mount 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # mounts["$mount"]=spdk_root 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # fss["$mount"]=overlay 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # avails["$mount"]=123294785536 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # sizes["$mount"]=129356517376 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # uses["$mount"]=6061731840 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # read -r source fs size use avail _ mount 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # mounts["$mount"]=tmpfs 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # fss["$mount"]=tmpfs 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # avails["$mount"]=64668225536 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # sizes["$mount"]=64678256640 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # uses["$mount"]=10031104 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # read -r source fs size use avail _ mount 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # mounts["$mount"]=tmpfs 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # fss["$mount"]=tmpfs 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # avails["$mount"]=25847910400 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # sizes["$mount"]=25871306752 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # uses["$mount"]=23396352 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # read -r source fs size use avail _ mount 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # mounts["$mount"]=efivarfs 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # fss["$mount"]=efivarfs 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # avails["$mount"]=349184 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # sizes["$mount"]=507904 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # uses["$mount"]=154624 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # read -r source fs size use avail _ mount 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # mounts["$mount"]=tmpfs 00:12:35.992 09:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # fss["$mount"]=tmpfs 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # avails["$mount"]=64677900288 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # sizes["$mount"]=64678260736 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # uses["$mount"]=360448 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # read -r source fs size use avail _ mount 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # mounts["$mount"]=tmpfs 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # fss["$mount"]=tmpfs 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # avails["$mount"]=12935639040 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # sizes["$mount"]=12935651328 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # uses["$mount"]=12288 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # read -r source fs size use avail _ mount 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # printf '* Looking for test storage...\n' 00:12:35.992 * Looking for test storage... 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # local target_space new_size 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # for target_dir in "${storage_candidates[@]}" 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # mount=/ 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # target_space=123294785536 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space == 0 || target_space < requested_size )) 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # (( target_space >= requested_size )) 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # [[ overlay == tmpfs ]] 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # [[ overlay == ramfs ]] 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # [[ / == / ]] 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # new_size=8276324352 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@396 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@403 -- # return 0 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1613 -- # set -o errtrace 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1614 -- # shopt -s extdebug 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1615 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1617 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:35.992 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1618 -- # true 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1620 -- # xtrace_fd 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1626 -- # lcov --version 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:12:35.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.993 --rc genhtml_branch_coverage=1 00:12:35.993 --rc genhtml_function_coverage=1 00:12:35.993 --rc genhtml_legend=1 00:12:35.993 --rc geninfo_all_blocks=1 00:12:35.993 --rc geninfo_unexecuted_blocks=1 00:12:35.993 00:12:35.993 ' 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:12:35.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.993 --rc genhtml_branch_coverage=1 00:12:35.993 --rc genhtml_function_coverage=1 00:12:35.993 --rc genhtml_legend=1 00:12:35.993 --rc geninfo_all_blocks=1 00:12:35.993 --rc geninfo_unexecuted_blocks=1 00:12:35.993 00:12:35.993 ' 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:12:35.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.993 --rc genhtml_branch_coverage=1 00:12:35.993 --rc genhtml_function_coverage=1 00:12:35.993 --rc genhtml_legend=1 00:12:35.993 --rc geninfo_all_blocks=1 00:12:35.993 --rc geninfo_unexecuted_blocks=1 00:12:35.993 00:12:35.993 ' 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:12:35.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.993 --rc genhtml_branch_coverage=1 00:12:35.993 --rc genhtml_function_coverage=1 00:12:35.993 --rc genhtml_legend=1 00:12:35.993 --rc geninfo_all_blocks=1 00:12:35.993 --rc geninfo_unexecuted_blocks=1 00:12:35.993 00:12:35.993 ' 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.993 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:35.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.994 09:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:35.994 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
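
One real defect is visible a little earlier in the trace: `nvmf/common.sh: line 33: [: : integer expression expected` is emitted right after the `'[' '' -eq 1 ']'` record, i.e. an unset flag reached a numeric test as the empty string. A reduced reproduction plus the usual defensive form; the flag name is hypothetical, since the log does not show which variable was empty:

    #!/usr/bin/env bash
    flag=''                                 # empty, as in the failing run
    [ "$flag" -eq 1 ] 2>/dev/null || echo 'numeric test on empty string fails'
    # Defaulting the expansion keeps the test well-formed:
    [ "${flag:-0}" -eq 1 ] || echo 'flag is off'
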
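
The `e810+=`, `x722+=`, and `mlx+=` records above (the remaining Mellanox device IDs continue just below) bucket discovered NICs by PCI vendor:device ID out of a prepopulated `pci_bus_cache` associative array, then resolve each function to its kernel net device through sysfs. A minimal sketch of the same lookup, assuming the cache is keyed `vendor:device` with a space-separated BDF list as the value:

    #!/usr/bin/env bash
    shopt -s nullglob
    declare -A pci_bus_cache
    intel=0x8086
    # Hypothetical cache entry; the harness prefills this from sysfs.
    pci_bus_cache["$intel:0x159b"]="0000:31:00.0 0000:31:00.1"
    e810=()
    # Unquoted on purpose so the space-separated BDF list splits into
    # array elements, exactly as the traced appends do.
    e810+=(${pci_bus_cache["$intel:0x159b"]})
    pci_devs=("${e810[@]}")
    for pci in "${pci_devs[@]}"; do
        # Map each PCI function to its net device(s), as the @409 records do.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        echo "$pci -> ${pci_net_devs[*]##*/}"
    done
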
00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:44.139 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:44.139 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:44.139 09:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:44.139 Found net devices under 0000:31:00.0: cvl_0_0 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:44.139 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:44.140 Found net devices under 0000:31:00.1: cvl_0_1 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.140 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:44.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:44.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:12:44.140 00:12:44.140 --- 10.0.0.2 ping statistics --- 00:12:44.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.140 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:44.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:12:44.140 00:12:44.140 --- 10.0.0.1 ping statistics --- 00:12:44.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.140 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1110 -- # xtrace_disable 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:44.140 ************************************ 00:12:44.140 START TEST nvmf_filesystem_no_in_capsule 00:12:44.140 ************************************ 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # nvmf_filesystem_part 0 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=3242036 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 3242036 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # '[' -z 3242036 ']' 00:12:44.140 
09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local max_retries=100
00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:44.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@843 -- # xtrace_disable
00:12:44.140 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:44.140 [2024-10-07 09:33:43.464316] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization...
00:12:44.140 [2024-10-07 09:33:43.464402] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:44.140 [2024-10-07 09:33:43.557189] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:44.140 [2024-10-07 09:33:43.651335] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:44.140 [2024-10-07 09:33:43.651401] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:44.140 [2024-10-07 09:33:43.651410] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:44.140 [2024-10-07 09:33:43.651417] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:44.140 [2024-10-07 09:33:43.651424] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
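Everything nvmf_tcp_init and nvmfappstart just did above is reproducible by hand. A condensed sketch, using the interface names, addresses and flags taken from this trace (outside this CI job the cvl_* NIC names and the workspace path will differ):

  ip netns add cvl_0_0_ns_spdk                  # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                            # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # then poll for /var/tmp/spdk.sock before issuing RPCs, which is what waitforlisten is doing here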
00:12:44.140 [2024-10-07 09:33:43.654014] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.140 [2024-10-07 09:33:43.654180] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.140 [2024-10-07 09:33:43.654342] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.140 [2024-10-07 09:33:43.654343] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.714 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:12:44.714 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@867 -- # return 0 00:12:44.714 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:44.714 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@733 -- # xtrace_disable 00:12:44.714 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.714 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.714 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:44.714 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:44.714 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@564 -- # xtrace_disable 00:12:44.714 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.714 [2024-10-07 09:33:44.327886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.714 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:12:44.714 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:44.714 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@564 -- # xtrace_disable 00:12:44.714 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.975 Malloc1 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@564 -- # xtrace_disable 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:12:44.975 09:33:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@564 -- # xtrace_disable 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@564 -- # xtrace_disable 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.975 [2024-10-07 09:33:44.479451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1366 -- # local bdev_name=Malloc1 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1367 -- # local bdev_info 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1368 -- # local bs 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1369 -- # local nb 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1370 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@564 -- # xtrace_disable 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1370 -- # bdev_info='[ 00:12:44.975 { 00:12:44.975 "name": "Malloc1", 00:12:44.975 "aliases": [ 00:12:44.975 "498522e7-b408-49ad-a8ed-82e38b9da03f" 00:12:44.975 ], 00:12:44.975 "product_name": "Malloc disk", 00:12:44.975 "block_size": 512, 00:12:44.975 "num_blocks": 1048576, 00:12:44.975 "uuid": "498522e7-b408-49ad-a8ed-82e38b9da03f", 00:12:44.975 "assigned_rate_limits": { 00:12:44.975 "rw_ios_per_sec": 0, 00:12:44.975 "rw_mbytes_per_sec": 0, 00:12:44.975 "r_mbytes_per_sec": 0, 00:12:44.975 "w_mbytes_per_sec": 0 00:12:44.975 }, 00:12:44.975 "claimed": true, 00:12:44.975 "claim_type": "exclusive_write", 00:12:44.975 "zoned": false, 00:12:44.975 "supported_io_types": { 00:12:44.975 "read": 
true, 00:12:44.975 "write": true, 00:12:44.975 "unmap": true, 00:12:44.975 "flush": true, 00:12:44.975 "reset": true, 00:12:44.975 "nvme_admin": false, 00:12:44.975 "nvme_io": false, 00:12:44.975 "nvme_io_md": false, 00:12:44.975 "write_zeroes": true, 00:12:44.975 "zcopy": true, 00:12:44.975 "get_zone_info": false, 00:12:44.975 "zone_management": false, 00:12:44.975 "zone_append": false, 00:12:44.975 "compare": false, 00:12:44.975 "compare_and_write": false, 00:12:44.975 "abort": true, 00:12:44.975 "seek_hole": false, 00:12:44.975 "seek_data": false, 00:12:44.975 "copy": true, 00:12:44.975 "nvme_iov_md": false 00:12:44.975 }, 00:12:44.975 "memory_domains": [ 00:12:44.975 { 00:12:44.975 "dma_device_id": "system", 00:12:44.975 "dma_device_type": 1 00:12:44.975 }, 00:12:44.975 { 00:12:44.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.975 "dma_device_type": 2 00:12:44.975 } 00:12:44.975 ], 00:12:44.975 "driver_specific": {} 00:12:44.975 } 00:12:44.975 ]' 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1371 -- # jq '.[] .block_size' 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1371 -- # bs=512 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1372 -- # jq '.[] .num_blocks' 00:12:44.975 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1372 -- # nb=1048576 00:12:44.976 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # bdev_size=512 00:12:44.976 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # echo 512 00:12:44.976 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:44.976 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.894 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.894 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local i=0 00:12:46.894 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.894 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # [[ -n '' ]] 00:12:46.894 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # sleep 2 00:12:48.814 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # (( i++ <= 15 )) 00:12:48.814 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # lsblk -l -o NAME,SERIAL 00:12:48.814 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:48.814 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # nvme_devices=1 00:12:48.814 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.814 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # return 0 00:12:48.814 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:48.814 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:48.814 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:48.814 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:48.814 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:48.814 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:48.814 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:48.814 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:48.814 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:48.814 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:48.814 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:48.814 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:49.075 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:50.018 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:50.018 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:50.018 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:12:50.018 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1110 -- # xtrace_disable 00:12:50.018 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.018 ************************************ 00:12:50.018 START TEST filesystem_ext4 00:12:50.018 ************************************ 00:12:50.018 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # nvmf_filesystem_create ext4 nvme0n1 
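Each filesystem_* subtest that follows runs the same nvmf_filesystem_create routine from target/filesystem.sh with a different fstype. Pieced together from the sh@ line numbers in this trace, the check reduces to the sketch below ($fstype is ext4, btrfs or xfs; make_filesystem picks -F for ext4 and -f for the other two):

  mkfs.$fstype $force /dev/nvme0n1p1       # the GPT partition created by parted above
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync            # prove the exported namespace takes writes
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                       # the target app must have survived the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1    # device and partition must still be visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1

The subtest passes only if every step returns 0; the real/user/sys block printed before each END TEST banner is run_test timing that whole sequence.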
00:12:50.018 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:50.018 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:50.018 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:50.018 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local fstype=ext4 00:12:50.018 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local dev_name=/dev/nvme0n1p1 00:12:50.018 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local i=0 00:12:50.018 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local force 00:12:50.018 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # '[' ext4 = ext4 ']' 00:12:50.018 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # force=-F 00:12:50.019 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@940 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:50.019 mke2fs 1.47.0 (5-Feb-2023) 00:12:50.019 Discarding device blocks: 0/522240 done 00:12:50.019 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:50.019 Filesystem UUID: f1910442-8b9c-4634-8141-267e2f6d2cb9 00:12:50.019 Superblock backups stored on blocks: 00:12:50.019 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:50.019 00:12:50.019 Allocating group tables: 0/64 done 00:12:50.019 Writing inode tables: 0/64 done 00:12:50.280 Creating journal (8192 blocks): done 00:12:52.609 Writing superblocks and filesystem accounting information: 0/64 done 00:12:52.609 00:12:52.609 09:33:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@948 -- # return 0 00:12:52.609 09:33:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:59.196 
09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3242036 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:59.196 00:12:59.196 real 0m8.627s 00:12:59.196 user 0m0.025s 00:12:59.196 sys 0m0.075s 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # xtrace_disable 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:59.196 ************************************ 00:12:59.196 END TEST filesystem_ext4 00:12:59.196 ************************************ 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1110 -- # xtrace_disable 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.196 ************************************ 00:12:59.196 START TEST filesystem_btrfs 00:12:59.196 ************************************ 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local fstype=btrfs 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local dev_name=/dev/nvme0n1p1 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local i=0 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local force 00:12:59.196 09:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # '[' btrfs = ext4 ']' 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # force=-f 00:12:59.196 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@940 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:59.196 btrfs-progs v6.8.1 00:12:59.196 See https://btrfs.readthedocs.io for more information. 00:12:59.196 00:12:59.196 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:59.196 NOTE: several default settings have changed in version 5.15, please make sure 00:12:59.196 this does not affect your deployments: 00:12:59.196 - DUP for metadata (-m dup) 00:12:59.196 - enabled no-holes (-O no-holes) 00:12:59.196 - enabled free-space-tree (-R free-space-tree) 00:12:59.196 00:12:59.196 Label: (null) 00:12:59.196 UUID: 74041940-66a8-4d50-92c1-90099894639b 00:12:59.196 Node size: 16384 00:12:59.196 Sector size: 4096 (CPU page size: 4096) 00:12:59.196 Filesystem size: 510.00MiB 00:12:59.196 Block group profiles: 00:12:59.196 Data: single 8.00MiB 00:12:59.196 Metadata: DUP 32.00MiB 00:12:59.196 System: DUP 8.00MiB 00:12:59.196 SSD detected: yes 00:12:59.197 Zoned device: no 00:12:59.197 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:59.197 Checksum: crc32c 00:12:59.197 Number of devices: 1 00:12:59.197 Devices: 00:12:59.197 ID SIZE PATH 00:12:59.197 1 510.00MiB /dev/nvme0n1p1 00:12:59.197 00:12:59.197 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@948 -- # return 0 00:12:59.197 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:59.766 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:59.766 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:59.766 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:59.766 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:59.766 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:59.766 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:59.766 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3242036 00:12:59.766 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:59.766 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:59.766 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:59.766 
09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:59.766 00:12:59.766 real 0m1.187s 00:12:59.766 user 0m0.027s 00:12:59.766 sys 0m0.122s 00:12:59.766 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # xtrace_disable 00:12:59.766 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:59.766 ************************************ 00:12:59.766 END TEST filesystem_btrfs 00:12:59.766 ************************************ 00:13:00.027 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:00.027 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:13:00.027 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1110 -- # xtrace_disable 00:13:00.027 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:00.027 ************************************ 00:13:00.027 START TEST filesystem_xfs 00:13:00.027 ************************************ 00:13:00.027 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # nvmf_filesystem_create xfs nvme0n1 00:13:00.027 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:00.027 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:00.027 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:00.027 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local fstype=xfs 00:13:00.027 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local dev_name=/dev/nvme0n1p1 00:13:00.027 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local i=0 00:13:00.027 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local force 00:13:00.027 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # '[' xfs = ext4 ']' 00:13:00.027 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # force=-f 00:13:00.027 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@940 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:00.027 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:00.027 = sectsz=512 attr=2, projid32bit=1 00:13:00.027 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:00.027 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:00.027 data 
= bsize=4096 blocks=130560, imaxpct=25 00:13:00.027 = sunit=0 swidth=0 blks 00:13:00.027 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:00.027 log =internal log bsize=4096 blocks=16384, version=2 00:13:00.027 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:00.027 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:00.971 Discarding blocks...Done. 00:13:00.971 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@948 -- # return 0 00:13:00.971 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:02.885 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:02.885 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:02.885 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:02.885 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:03.146 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:03.146 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:03.146 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3242036 00:13:03.146 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:03.146 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:03.146 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:03.146 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:03.146 00:13:03.146 real 0m3.110s 00:13:03.146 user 0m0.027s 00:13:03.146 sys 0m0.077s 00:13:03.146 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # xtrace_disable 00:13:03.146 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:03.146 ************************************ 00:13:03.146 END TEST filesystem_xfs 00:13:03.146 ************************************ 00:13:03.146 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:03.146 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:03.146 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.406 09:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # local i=0 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -o NAME,SERIAL 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME,SERIAL 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1234 -- # return 0 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3242036 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' -z 3242036 ']' 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # kill -0 3242036 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # uname 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3242036 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3242036' 00:13:03.406 killing process with pid 3242036 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # kill 3242036 00:13:03.406 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@977 -- # wait 3242036 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:03.669 00:13:03.669 real 0m19.741s 00:13:03.669 user 1m17.826s 00:13:03.669 sys 0m1.452s 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # xtrace_disable 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.669 ************************************ 00:13:03.669 END TEST nvmf_filesystem_no_in_capsule 00:13:03.669 ************************************ 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1110 -- # xtrace_disable 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:03.669 ************************************ 00:13:03.669 START TEST nvmf_filesystem_in_capsule 00:13:03.669 ************************************ 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # nvmf_filesystem_part 4096 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=3245999 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 3245999 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # '[' -z 3245999 ']' 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local max_retries=100 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
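The in_capsule pass repeats the whole suite with one difference: the TCP transport is created with a 4096-byte in-capsule data size (-c 4096 instead of the -c 0 used above), so small host-to-controller writes travel inside the command capsule rather than being pulled later via R2T. rpc_cmd in this harness is a thin wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock; assuming that, a hand-run equivalent of the provisioning traced below would be:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1          # 512 MiB ramdisk, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb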
00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@843 -- # xtrace_disable 00:13:03.669 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.669 [2024-10-07 09:34:03.280201] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:13:03.669 [2024-10-07 09:34:03.280254] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.929 [2024-10-07 09:34:03.362780] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.929 [2024-10-07 09:34:03.416943] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.929 [2024-10-07 09:34:03.416974] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.929 [2024-10-07 09:34:03.416979] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.929 [2024-10-07 09:34:03.416984] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.929 [2024-10-07 09:34:03.416988] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.929 [2024-10-07 09:34:03.418244] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.929 [2024-10-07 09:34:03.418367] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.929 [2024-10-07 09:34:03.418516] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.929 [2024-10-07 09:34:03.418518] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.501 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:13:04.501 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@867 -- # return 0 00:13:04.501 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:04.501 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@733 -- # xtrace_disable 00:13:04.501 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.501 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.501 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:04.501 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:04.501 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:04.501 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.501 [2024-10-07 09:34:04.124410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.501 09:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:04.501 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:04.501 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:04.502 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.762 Malloc1 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.762 [2024-10-07 09:34:04.246118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1366 -- # local bdev_name=Malloc1 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1367 -- # local bdev_info 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1368 -- # local bs 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1369 -- # local nb 00:13:04.762 09:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1370 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:04.762 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.763 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:04.763 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1370 -- # bdev_info='[ 00:13:04.763 { 00:13:04.763 "name": "Malloc1", 00:13:04.763 "aliases": [ 00:13:04.763 "ee4b7f43-52d1-4f86-be41-a1f0d4623ae6" 00:13:04.763 ], 00:13:04.763 "product_name": "Malloc disk", 00:13:04.763 "block_size": 512, 00:13:04.763 "num_blocks": 1048576, 00:13:04.763 "uuid": "ee4b7f43-52d1-4f86-be41-a1f0d4623ae6", 00:13:04.763 "assigned_rate_limits": { 00:13:04.763 "rw_ios_per_sec": 0, 00:13:04.763 "rw_mbytes_per_sec": 0, 00:13:04.763 "r_mbytes_per_sec": 0, 00:13:04.763 "w_mbytes_per_sec": 0 00:13:04.763 }, 00:13:04.763 "claimed": true, 00:13:04.763 "claim_type": "exclusive_write", 00:13:04.763 "zoned": false, 00:13:04.763 "supported_io_types": { 00:13:04.763 "read": true, 00:13:04.763 "write": true, 00:13:04.763 "unmap": true, 00:13:04.763 "flush": true, 00:13:04.763 "reset": true, 00:13:04.763 "nvme_admin": false, 00:13:04.763 "nvme_io": false, 00:13:04.763 "nvme_io_md": false, 00:13:04.763 "write_zeroes": true, 00:13:04.763 "zcopy": true, 00:13:04.763 "get_zone_info": false, 00:13:04.763 "zone_management": false, 00:13:04.763 "zone_append": false, 00:13:04.763 "compare": false, 00:13:04.763 "compare_and_write": false, 00:13:04.763 "abort": true, 00:13:04.763 "seek_hole": false, 00:13:04.763 "seek_data": false, 00:13:04.763 "copy": true, 00:13:04.763 "nvme_iov_md": false 00:13:04.763 }, 00:13:04.763 "memory_domains": [ 00:13:04.763 { 00:13:04.763 "dma_device_id": "system", 00:13:04.763 "dma_device_type": 1 00:13:04.763 }, 00:13:04.763 { 00:13:04.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.763 "dma_device_type": 2 00:13:04.763 } 00:13:04.763 ], 00:13:04.763 "driver_specific": {} 00:13:04.763 } 00:13:04.763 ]' 00:13:04.763 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1371 -- # jq '.[] .block_size' 00:13:04.763 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1371 -- # bs=512 00:13:04.763 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1372 -- # jq '.[] .num_blocks' 00:13:04.763 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1372 -- # nb=1048576 00:13:04.763 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # bdev_size=512 00:13:04.763 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # echo 512 00:13:04.763 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:04.763 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.673 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.673 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local i=0 00:13:06.673 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.673 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # [[ -n '' ]] 00:13:06.673 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # sleep 2 00:13:08.583 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # (( i++ <= 15 )) 00:13:08.583 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # lsblk -l -o NAME,SERIAL 00:13:08.583 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.583 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # nvme_devices=1 00:13:08.583 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.583 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # return 0 00:13:08.583 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:08.583 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:08.583 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:08.583 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:08.583 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:08.583 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:08.583 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:08.584 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:08.584 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:08.584 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:08.584 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:08.845 09:34:08 
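The host side mirrors it: connect by NQN, poll lsblk until the namespace surfaces under the target's serial, sanity-check the size against the 536870912-byte malloc bdev, then lay down a GPT with a single partition. A condensed sketch of what waitforserial and sec_size_to_bytes do here; the retry bound and the /sys/block read are inferred from the trace, not copied from the scripts:

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  i=0
  while (( i++ <= 15 )); do              # waitforserial, roughly
      lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME && break
      sleep 2
  done
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  echo $(( $(cat /sys/block/$nvme_name/size) * 512 ))   # expect 536870912
  parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%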
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:09.105 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:10.045 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:10.045 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:10.045 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:13:10.045 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1110 -- # xtrace_disable 00:13:10.045 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.305 ************************************ 00:13:10.305 START TEST filesystem_in_capsule_ext4 00:13:10.305 ************************************ 00:13:10.305 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:10.305 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:10.305 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:10.305 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:10.305 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local fstype=ext4 00:13:10.305 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local dev_name=/dev/nvme0n1p1 00:13:10.305 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local i=0 00:13:10.305 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local force 00:13:10.305 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # '[' ext4 = ext4 ']' 00:13:10.305 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # force=-F 00:13:10.305 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@940 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:10.305 mke2fs 1.47.0 (5-Feb-2023) 00:13:10.305 Discarding device blocks: 0/522240 done 00:13:10.305 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:10.305 Filesystem UUID: d26780c2-a819-42b3-959b-008705f6e2cc 00:13:10.305 Superblock backups stored on blocks: 00:13:10.305 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:10.305 00:13:10.305 Allocating group tables: 0/64 done 00:13:10.305 Writing inode tables: 
0/64 done 00:13:10.565 Creating journal (8192 blocks): done 00:13:12.786 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:13:12.786 00:13:12.786 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@948 -- # return 0 00:13:12.786 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:19.371 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:19.371 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:19.371 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:19.371 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:19.371 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:19.371 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:19.371 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3245999 00:13:19.371 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:19.371 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:19.371 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:19.371 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:19.371 00:13:19.371 real 0m8.964s 00:13:19.371 user 0m0.043s 00:13:19.371 sys 0m0.065s 00:13:19.372 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # xtrace_disable 00:13:19.372 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:19.372 ************************************ 00:13:19.372 END TEST filesystem_in_capsule_ext4 00:13:19.372 ************************************ 00:13:19.372 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:19.372 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:13:19.372 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1110 -- # xtrace_disable 00:13:19.372 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:19.372 
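Each of these filesystem passes (ext4 above, btrfs and xfs below) is the same smoke test: make the filesystem on the partition, mount it, create and delete a file with syncs in between, unmount, then verify the target process and both block devices survived. Distilled from the trace (pid and mount point are this run's; ext4 is forced with -F, btrfs and xfs with -f, per the make_filesystem branches at @934-@937):

  mkfs.ext4 -F /dev/nvme0n1p1        # mkfs.btrfs -f / mkfs.xfs -f for the others
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa; sync
  rm /mnt/device/aaa;   sync
  umount /mnt/device
  kill -0 3245999                    # target app must still be alive
  lsblk -l -o NAME | grep -qw nvme0n1
  lsblk -l -o NAME | grep -qw nvme0n1p1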
************************************ 00:13:19.372 START TEST filesystem_in_capsule_btrfs 00:13:19.372 ************************************ 00:13:19.372 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:19.372 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:19.372 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:19.372 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:19.372 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local fstype=btrfs 00:13:19.372 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local dev_name=/dev/nvme0n1p1 00:13:19.372 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local i=0 00:13:19.372 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local force 00:13:19.372 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # '[' btrfs = ext4 ']' 00:13:19.372 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # force=-f 00:13:19.372 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@940 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:19.634 btrfs-progs v6.8.1 00:13:19.634 See https://btrfs.readthedocs.io for more information. 00:13:19.634 00:13:19.634 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:19.634 NOTE: several default settings have changed in version 5.15, please make sure 00:13:19.634 this does not affect your deployments: 00:13:19.634 - DUP for metadata (-m dup) 00:13:19.634 - enabled no-holes (-O no-holes) 00:13:19.634 - enabled free-space-tree (-R free-space-tree) 00:13:19.634 00:13:19.634 Label: (null) 00:13:19.634 UUID: 58b87945-37b1-4aa6-94ee-a1675e56fc60 00:13:19.634 Node size: 16384 00:13:19.634 Sector size: 4096 (CPU page size: 4096) 00:13:19.634 Filesystem size: 510.00MiB 00:13:19.634 Block group profiles: 00:13:19.634 Data: single 8.00MiB 00:13:19.634 Metadata: DUP 32.00MiB 00:13:19.634 System: DUP 8.00MiB 00:13:19.634 SSD detected: yes 00:13:19.634 Zoned device: no 00:13:19.634 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:19.634 Checksum: crc32c 00:13:19.634 Number of devices: 1 00:13:19.634 Devices: 00:13:19.634 ID SIZE PATH 00:13:19.634 1 510.00MiB /dev/nvme0n1p1 00:13:19.634 00:13:19.634 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@948 -- # return 0 00:13:19.634 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:19.894 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:19.894 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:19.894 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:20.156 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:20.156 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:20.156 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:20.156 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3245999 00:13:20.156 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:20.156 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:20.156 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:20.156 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:20.156 00:13:20.156 real 0m0.857s 00:13:20.156 user 0m0.020s 00:13:20.156 sys 0m0.131s 00:13:20.156 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # xtrace_disable 00:13:20.156 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:13:20.156 ************************************ 00:13:20.156 END TEST filesystem_in_capsule_btrfs 00:13:20.156 ************************************ 00:13:20.156 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:20.156 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:13:20.156 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1110 -- # xtrace_disable 00:13:20.156 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:20.156 ************************************ 00:13:20.156 START TEST filesystem_in_capsule_xfs 00:13:20.156 ************************************ 00:13:20.156 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # nvmf_filesystem_create xfs nvme0n1 00:13:20.156 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:20.156 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:20.157 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:20.157 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local fstype=xfs 00:13:20.157 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local dev_name=/dev/nvme0n1p1 00:13:20.157 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local i=0 00:13:20.157 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local force 00:13:20.157 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # '[' xfs = ext4 ']' 00:13:20.157 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # force=-f 00:13:20.157 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@940 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:20.157 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:20.157 = sectsz=512 attr=2, projid32bit=1 00:13:20.157 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:20.157 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:20.157 data = bsize=4096 blocks=130560, imaxpct=25 00:13:20.157 = sunit=0 swidth=0 blks 00:13:20.157 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:20.157 log =internal log bsize=4096 blocks=16384, version=2 00:13:20.157 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:20.157 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:21.543 Discarding blocks...Done. 
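All three filesystems land on the same 510.00 MiB partition (the 512 MiB namespace less GPT overhead at both ends), and the geometries the three mkfs tools report agree exactly:

  echo $(( 522240 * 1024 ))       # ext4:  522240 1k blocks      -> 534773760
  echo $(( 130560 * 4096 ))       # xfs:   130560 4k data blocks -> 534773760
  echo $(( 510 * 1024 * 1024 ))   # btrfs: 510.00MiB             -> 534773760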
00:13:21.543 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@948 -- # return 0 00:13:21.543 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:23.456 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:23.456 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:23.456 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:23.456 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:23.456 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:23.456 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:23.456 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3245999 00:13:23.456 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:23.456 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:23.456 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:23.456 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:23.456 00:13:23.456 real 0m3.087s 00:13:23.456 user 0m0.031s 00:13:23.456 sys 0m0.075s 00:13:23.456 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # xtrace_disable 00:13:23.456 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:23.456 ************************************ 00:13:23.456 END TEST filesystem_in_capsule_xfs 00:13:23.456 ************************************ 00:13:23.456 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:23.456 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:23.456 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.456 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:23.456 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1222 -- # local i=0 00:13:23.456 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -o NAME,SERIAL 00:13:23.456 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.456 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME,SERIAL 00:13:23.456 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.456 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1234 -- # return 0 00:13:23.456 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.456 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:23.456 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.456 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:23.457 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:23.457 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3245999 00:13:23.457 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' -z 3245999 ']' 00:13:23.457 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # kill -0 3245999 00:13:23.457 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # uname 00:13:23.457 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:13:23.457 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3245999 00:13:23.718 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:13:23.718 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:13:23.718 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3245999' 00:13:23.718 killing process with pid 3245999 00:13:23.718 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # kill 3245999 00:13:23.718 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@977 -- # wait 3245999 00:13:23.718 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:23.718 00:13:23.718 real 0m20.135s 00:13:23.718 user 1m19.650s 00:13:23.718 sys 0m1.387s 00:13:23.718 09:34:23 
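The teardown traced above, end to end; the pid is this run's and the disconnect-wait loop is a condensed form of waitforserial_disconnect:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # drop the test partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  while lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 3245999 && wait 3245999                      # killprocess: SIGTERM, then reap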
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # xtrace_disable 00:13:23.718 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.718 ************************************ 00:13:23.718 END TEST nvmf_filesystem_in_capsule 00:13:23.718 ************************************ 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:23.978 rmmod nvme_tcp 00:13:23.978 rmmod nvme_fabrics 00:13:23.978 rmmod nvme_keyring 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.978 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.891 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:25.891 00:13:25.891 real 0m50.596s 00:13:25.891 user 2m40.003s 00:13:25.891 sys 0m8.976s 00:13:26.152 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # xtrace_disable 00:13:26.152 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:26.152 
************************************ 00:13:26.152 END TEST nvmf_filesystem 00:13:26.152 ************************************ 00:13:26.152 09:34:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:26.152 09:34:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:13:26.152 09:34:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:13:26.152 09:34:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:26.152 ************************************ 00:13:26.152 START TEST nvmf_target_discovery 00:13:26.153 ************************************ 00:13:26.153 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:26.153 * Looking for test storage... 00:13:26.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.153 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:13:26.153 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1626 -- # lcov --version 00:13:26.153 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:26.414 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:13:26.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.415 --rc genhtml_branch_coverage=1 00:13:26.415 --rc genhtml_function_coverage=1 00:13:26.415 --rc genhtml_legend=1 00:13:26.415 --rc geninfo_all_blocks=1 00:13:26.415 --rc geninfo_unexecuted_blocks=1 00:13:26.415 00:13:26.415 ' 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:13:26.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.415 --rc genhtml_branch_coverage=1 00:13:26.415 --rc genhtml_function_coverage=1 00:13:26.415 --rc genhtml_legend=1 00:13:26.415 --rc geninfo_all_blocks=1 00:13:26.415 --rc geninfo_unexecuted_blocks=1 00:13:26.415 00:13:26.415 ' 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:13:26.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.415 --rc genhtml_branch_coverage=1 00:13:26.415 --rc genhtml_function_coverage=1 00:13:26.415 --rc genhtml_legend=1 00:13:26.415 --rc geninfo_all_blocks=1 00:13:26.415 --rc geninfo_unexecuted_blocks=1 00:13:26.415 00:13:26.415 ' 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:13:26.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.415 --rc genhtml_branch_coverage=1 00:13:26.415 --rc genhtml_function_coverage=1 00:13:26.415 --rc genhtml_legend=1 00:13:26.415 --rc geninfo_all_blocks=1 00:13:26.415 --rc geninfo_unexecuted_blocks=1 00:13:26.415 00:13:26.415 ' 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:26.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.415 09:34:25 
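The stray "[: : integer expression expected" above is nvmf/common.sh line 33 feeding an empty variable to an arithmetic test; the script shrugs it off and keeps going. Reproduced and guarded below (FLAG is a stand-in name, not the script's actual variable):

  [ '' -eq 1 ]              # -> [: : integer expression expected (status 2)
  [ "${FLAG:-0}" -eq 1 ]    # defaulted expansion keeps the test quiet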
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:26.415 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:34.555 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:34.555 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:34.555 09:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:34.555 Found net devices under 0000:31:00.0: cvl_0_0 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:34.555 Found net devices under 0000:31:00.1: cvl_0_1 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:13:34.555 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.556 09:34:33 
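The discovery loop above maps each supported PCI function to its kernel interface purely through sysfs. A distilled equivalent for the two E810 ports found here; the operstate read is inferred from the [[ up == up ]] trace, not lifted from the script:

  for pci in 0000:31:00.0 0000:31:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          [[ $(< "$dev/operstate") == up ]] && echo "$pci -> ${dev##*/}"
      done
  done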
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:34.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:34.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:13:34.556 00:13:34.556 --- 10.0.0.2 ping statistics --- 00:13:34.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.556 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:34.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:13:34.556 00:13:34.556 --- 10.0.0.1 ping statistics --- 00:13:34.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.556 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=3254603 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 3254603 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # '[' -z 3254603 ']' 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local max_retries=100 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@843 -- # xtrace_disable 00:13:34.556 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.556 [2024-10-07 09:34:33.670403] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
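The nvmf_tcp_init block traced just above builds a two-endpoint topology from a single dual-port NIC with no veth pair: one port (cvl_0_0) is moved into a fresh network namespace and becomes the target at 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an SPDK_NVMF-tagged iptables rule opens TCP 4420, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. A condensed sketch of the same steps, with device names, IPs, and app flags taken from this run:

# Sketch: the namespace topology nvmf_tcp_init builds, condensed from the trace.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                     # target port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:...'              # tagged so cleanup can filter it back out
ping -c 1 10.0.0.2                                  # root ns -> target port
ip netns exec "$NS" ping -c 1 10.0.0.1              # target ns -> initiator port
# nvmfappstart then wraps the target app so it runs inside the namespace:
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!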
00:13:34.556 [2024-10-07 09:34:33.670499] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.556 [2024-10-07 09:34:33.762101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:34.556 [2024-10-07 09:34:33.857623] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.556 [2024-10-07 09:34:33.857682] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.556 [2024-10-07 09:34:33.857692] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.556 [2024-10-07 09:34:33.857699] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.556 [2024-10-07 09:34:33.857705] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:34.556 [2024-10-07 09:34:33.859751] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.556 [2024-10-07 09:34:33.859912] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.556 [2024-10-07 09:34:33.860074] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.556 [2024-10-07 09:34:33.860073] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@867 -- # return 0 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@733 -- # xtrace_disable 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.129 [2024-10-07 09:34:34.548691] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.129 Null1 00:13:35.129 09:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.129 [2024-10-07 09:34:34.609167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.129 Null2 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:35.129 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.130 Null3 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.130 Null4 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.130 09:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.130 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:13:35.391 00:13:35.391 Discovery Log Number of Records 6, Generation counter 6 00:13:35.391 =====Discovery Log Entry 0====== 00:13:35.391 trtype: tcp 00:13:35.391 adrfam: ipv4 00:13:35.391 subtype: current discovery subsystem 00:13:35.391 treq: not required 00:13:35.391 portid: 0 00:13:35.391 trsvcid: 4420 00:13:35.391 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:35.391 traddr: 10.0.0.2 00:13:35.391 eflags: explicit discovery connections, duplicate discovery information 00:13:35.391 sectype: none 00:13:35.391 =====Discovery Log Entry 1====== 00:13:35.391 trtype: tcp 00:13:35.391 adrfam: ipv4 00:13:35.391 
subtype: nvme subsystem 00:13:35.391 treq: not required 00:13:35.391 portid: 0 00:13:35.391 trsvcid: 4420 00:13:35.391 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:35.391 traddr: 10.0.0.2 00:13:35.391 eflags: none 00:13:35.391 sectype: none 00:13:35.391 =====Discovery Log Entry 2====== 00:13:35.391 trtype: tcp 00:13:35.391 adrfam: ipv4 00:13:35.391 subtype: nvme subsystem 00:13:35.391 treq: not required 00:13:35.391 portid: 0 00:13:35.391 trsvcid: 4420 00:13:35.391 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:35.391 traddr: 10.0.0.2 00:13:35.391 eflags: none 00:13:35.391 sectype: none 00:13:35.391 =====Discovery Log Entry 3====== 00:13:35.391 trtype: tcp 00:13:35.391 adrfam: ipv4 00:13:35.391 subtype: nvme subsystem 00:13:35.391 treq: not required 00:13:35.391 portid: 0 00:13:35.391 trsvcid: 4420 00:13:35.392 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:35.392 traddr: 10.0.0.2 00:13:35.392 eflags: none 00:13:35.392 sectype: none 00:13:35.392 =====Discovery Log Entry 4====== 00:13:35.392 trtype: tcp 00:13:35.392 adrfam: ipv4 00:13:35.392 subtype: nvme subsystem 00:13:35.392 treq: not required 00:13:35.392 portid: 0 00:13:35.392 trsvcid: 4420 00:13:35.392 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:35.392 traddr: 10.0.0.2 00:13:35.392 eflags: none 00:13:35.392 sectype: none 00:13:35.392 =====Discovery Log Entry 5====== 00:13:35.392 trtype: tcp 00:13:35.392 adrfam: ipv4 00:13:35.392 subtype: discovery subsystem referral 00:13:35.392 treq: not required 00:13:35.392 portid: 0 00:13:35.392 trsvcid: 4430 00:13:35.392 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:35.392 traddr: 10.0.0.2 00:13:35.392 eflags: none 00:13:35.392 sectype: none 00:13:35.392 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:35.392 Perform nvmf subsystem discovery via RPC 00:13:35.392 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:35.392 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.392 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.392 [ 00:13:35.392 { 00:13:35.392 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:35.392 "subtype": "Discovery", 00:13:35.392 "listen_addresses": [ 00:13:35.392 { 00:13:35.392 "trtype": "TCP", 00:13:35.392 "adrfam": "IPv4", 00:13:35.392 "traddr": "10.0.0.2", 00:13:35.392 "trsvcid": "4420" 00:13:35.392 } 00:13:35.392 ], 00:13:35.392 "allow_any_host": true, 00:13:35.392 "hosts": [] 00:13:35.392 }, 00:13:35.392 { 00:13:35.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:35.392 "subtype": "NVMe", 00:13:35.392 "listen_addresses": [ 00:13:35.392 { 00:13:35.392 "trtype": "TCP", 00:13:35.392 "adrfam": "IPv4", 00:13:35.392 "traddr": "10.0.0.2", 00:13:35.392 "trsvcid": "4420" 00:13:35.392 } 00:13:35.392 ], 00:13:35.392 "allow_any_host": true, 00:13:35.392 "hosts": [], 00:13:35.392 "serial_number": "SPDK00000000000001", 00:13:35.392 "model_number": "SPDK bdev Controller", 00:13:35.392 "max_namespaces": 32, 00:13:35.392 "min_cntlid": 1, 00:13:35.392 "max_cntlid": 65519, 00:13:35.392 "namespaces": [ 00:13:35.392 { 00:13:35.392 "nsid": 1, 00:13:35.392 "bdev_name": "Null1", 00:13:35.392 "name": "Null1", 00:13:35.392 "nguid": "309CDA8D774146188114473AAD11A8AD", 00:13:35.392 "uuid": "309cda8d-7741-4618-8114-473aad11a8ad" 00:13:35.392 } 00:13:35.392 ] 00:13:35.392 }, 00:13:35.392 { 00:13:35.392 "nqn": 
"nqn.2016-06.io.spdk:cnode2", 00:13:35.392 "subtype": "NVMe", 00:13:35.392 "listen_addresses": [ 00:13:35.392 { 00:13:35.392 "trtype": "TCP", 00:13:35.392 "adrfam": "IPv4", 00:13:35.392 "traddr": "10.0.0.2", 00:13:35.392 "trsvcid": "4420" 00:13:35.392 } 00:13:35.392 ], 00:13:35.392 "allow_any_host": true, 00:13:35.392 "hosts": [], 00:13:35.392 "serial_number": "SPDK00000000000002", 00:13:35.392 "model_number": "SPDK bdev Controller", 00:13:35.392 "max_namespaces": 32, 00:13:35.392 "min_cntlid": 1, 00:13:35.392 "max_cntlid": 65519, 00:13:35.392 "namespaces": [ 00:13:35.392 { 00:13:35.392 "nsid": 1, 00:13:35.392 "bdev_name": "Null2", 00:13:35.392 "name": "Null2", 00:13:35.392 "nguid": "0304F47FE3B249DCA3DF49EA49610263", 00:13:35.392 "uuid": "0304f47f-e3b2-49dc-a3df-49ea49610263" 00:13:35.392 } 00:13:35.392 ] 00:13:35.392 }, 00:13:35.392 { 00:13:35.392 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:35.392 "subtype": "NVMe", 00:13:35.392 "listen_addresses": [ 00:13:35.392 { 00:13:35.392 "trtype": "TCP", 00:13:35.392 "adrfam": "IPv4", 00:13:35.392 "traddr": "10.0.0.2", 00:13:35.392 "trsvcid": "4420" 00:13:35.392 } 00:13:35.392 ], 00:13:35.392 "allow_any_host": true, 00:13:35.392 "hosts": [], 00:13:35.392 "serial_number": "SPDK00000000000003", 00:13:35.392 "model_number": "SPDK bdev Controller", 00:13:35.392 "max_namespaces": 32, 00:13:35.392 "min_cntlid": 1, 00:13:35.392 "max_cntlid": 65519, 00:13:35.392 "namespaces": [ 00:13:35.392 { 00:13:35.392 "nsid": 1, 00:13:35.392 "bdev_name": "Null3", 00:13:35.392 "name": "Null3", 00:13:35.392 "nguid": "F9AEF98F6D10414BB4BBBD37A711D940", 00:13:35.392 "uuid": "f9aef98f-6d10-414b-b4bb-bd37a711d940" 00:13:35.392 } 00:13:35.392 ] 00:13:35.392 }, 00:13:35.392 { 00:13:35.392 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:35.392 "subtype": "NVMe", 00:13:35.392 "listen_addresses": [ 00:13:35.392 { 00:13:35.392 "trtype": "TCP", 00:13:35.392 "adrfam": "IPv4", 00:13:35.392 "traddr": "10.0.0.2", 00:13:35.392 "trsvcid": "4420" 00:13:35.392 } 00:13:35.392 ], 00:13:35.392 "allow_any_host": true, 00:13:35.392 "hosts": [], 00:13:35.392 "serial_number": "SPDK00000000000004", 00:13:35.392 "model_number": "SPDK bdev Controller", 00:13:35.392 "max_namespaces": 32, 00:13:35.392 "min_cntlid": 1, 00:13:35.392 "max_cntlid": 65519, 00:13:35.392 "namespaces": [ 00:13:35.392 { 00:13:35.392 "nsid": 1, 00:13:35.392 "bdev_name": "Null4", 00:13:35.392 "name": "Null4", 00:13:35.392 "nguid": "75327F2288124292AD103700ED6E8094", 00:13:35.392 "uuid": "75327f22-8812-4292-ad10-3700ed6e8094" 00:13:35.392 } 00:13:35.392 ] 00:13:35.392 } 00:13:35.392 ] 00:13:35.392 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.392 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:35.392 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:35.392 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.392 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.392 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.392 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.392 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:35.392 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.392 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.653 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.653 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:35.653 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:35.653 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.653 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.653 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:35.654 rmmod nvme_tcp 00:13:35.654 rmmod nvme_fabrics 00:13:35.654 rmmod nvme_keyring 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 3254603 ']' 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 3254603 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' -z 3254603 ']' 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # kill -0 3254603 00:13:35.654 09:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # uname 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:13:35.654 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3254603 00:13:35.915 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:13:35.915 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:13:35.915 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3254603' 00:13:35.915 killing process with pid 3254603 00:13:35.915 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # kill 3254603 00:13:35.915 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@977 -- # wait 3254603 00:13:35.915 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:35.915 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:35.915 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:35.915 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:35.915 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:13:35.915 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:35.915 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:13:35.915 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:35.915 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:35.915 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.915 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.915 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.461 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:38.461 00:13:38.461 real 0m11.971s 00:13:38.461 user 0m9.047s 00:13:38.461 sys 0m6.249s 00:13:38.461 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # xtrace_disable 00:13:38.461 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 ************************************ 00:13:38.461 END TEST nvmf_target_discovery 00:13:38.461 ************************************ 00:13:38.461 09:34:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:38.461 09:34:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:13:38.461 09:34:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 
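Teardown above is symmetric and trap-driven: the EXIT/SIGINT/SIGTERM trap installed earlier ('process_shm ... || :; nvmftestfini') unloads nvme-tcp, nvme-fabrics, and nvme-keyring, kills the target by pid, restores iptables minus the SPDK_NVMF-tagged rules (the iptr helper at @297/@789), flushes the initiator address, and removes the namespace. A sketch of that cleanup pattern, reusing NS and nvmfpid from the setup sketch earlier; the namespace deletion stands in for what _remove_spdk_ns does in the trace:

# Sketch: the trap-driven cleanup mirrored from @510/@514-@522 above.
cleanup() {
  kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid" && wait "$nvmfpid"
  modprobe -v -r nvme-tcp nvme-fabrics 2>/dev/null || true
  # iptr: re-apply the ruleset with the SPDK-tagged rules filtered out
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1 2>/dev/null
  ip netns delete "$NS" 2>/dev/null             # stands in for _remove_spdk_ns
}
trap cleanup SIGINT SIGTERM EXIT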
00:13:38.461 09:34:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:38.461 ************************************ 00:13:38.461 START TEST nvmf_referrals 00:13:38.461 ************************************ 00:13:38.461 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:38.461 * Looking for test storage... 00:13:38.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:38.461 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:13:38.461 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1626 -- # lcov --version 00:13:38.461 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:13:38.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.462 --rc genhtml_branch_coverage=1 00:13:38.462 --rc genhtml_function_coverage=1 00:13:38.462 --rc genhtml_legend=1 00:13:38.462 --rc geninfo_all_blocks=1 00:13:38.462 --rc geninfo_unexecuted_blocks=1 00:13:38.462 00:13:38.462 ' 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:13:38.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.462 --rc genhtml_branch_coverage=1 00:13:38.462 --rc genhtml_function_coverage=1 00:13:38.462 --rc genhtml_legend=1 00:13:38.462 --rc geninfo_all_blocks=1 00:13:38.462 --rc geninfo_unexecuted_blocks=1 00:13:38.462 00:13:38.462 ' 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:13:38.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.462 --rc genhtml_branch_coverage=1 00:13:38.462 --rc genhtml_function_coverage=1 00:13:38.462 --rc genhtml_legend=1 00:13:38.462 --rc geninfo_all_blocks=1 00:13:38.462 --rc geninfo_unexecuted_blocks=1 00:13:38.462 00:13:38.462 ' 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:13:38.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.462 --rc genhtml_branch_coverage=1 00:13:38.462 --rc genhtml_function_coverage=1 00:13:38.462 --rc genhtml_legend=1 00:13:38.462 --rc geninfo_all_blocks=1 00:13:38.462 --rc geninfo_unexecuted_blocks=1 00:13:38.462 00:13:38.462 ' 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
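The lt/cmp_versions probe traced above decides whether the installed lcov is new enough for the branch-coverage rc options by splitting both versions on dots and dashes and comparing field by field, padding the shorter one with zeros. A trimmed reconstruction of that comparison, following the scripts/common.sh structure visible in the trace (reduced to the '<' case):

# Sketch: field-wise version compare, as in scripts/common.sh (trimmed to '<').
lt() {  # usage: lt 1.15 2  -> returns 0 (true) iff $1 < $2
  local -a ver1 ver2
  IFS=.- read -ra ver1 <<< "$1"
  IFS=.- read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1  # equal versions are not less-than
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov < 2: use legacy rc opts"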
# uname -s 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.462 09:34:37 
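The two very long PATH values above show paths/export.sh prepending the same Go/protoc/golangci directories on every sourcing, so repeated sourcing across a long run lets the entries pile up. A dedup-on-prepend helper avoids that growth; path_prepend below is hypothetical and not part of the SPDK tree:

# Sketch: idempotent PATH prepend; avoids the duplication visible above.
# path_prepend is a hypothetical helper, not present in paths/export.sh.
path_prepend() {
  case ":$PATH:" in
    *":$1:"*) ;;                  # already present: do nothing
    *) PATH="$1:$PATH" ;;
  esac
}
path_prepend /opt/golangci/1.54.2/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/go/1.21.1/bin
export PATH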
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:38.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:38.462 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:38.463 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:46.601 Found 0000:31:00.0 (0x8086 - 
0x159b) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:46.601 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:46.601 Found net devices under 0000:31:00.0: cvl_0_0 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 
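The sysfs lookup traced above resolves a PCI function to its kernel netdev name by globbing /sys/bus/pci/devices/<addr>/net/. A minimal standalone sketch of that step, assuming bash with nullglob and reusing the 0000:31:00.0 address seen in this run:

  #!/usr/bin/env bash
  shopt -s nullglob
  pci=0000:31:00.0
  # each match is a full sysfs path, e.g. /sys/bus/pci/devices/0000:31:00.0/net/cvl_0_0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  (( ${#pci_net_devs[@]} > 0 )) || { echo "no netdev bound to $pci"; exit 1; }
  # strip everything up to the last slash, keeping only the interface name
  pci_net_devs=("${pci_net_devs[@]##*/}")
  echo "Found net devices under $pci: ${pci_net_devs[*]}"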
00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:46.601 Found net devices under 0000:31:00.1: cvl_0_1 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:46.601 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- 
# ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:46.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:13:46.602 00:13:46.602 --- 10.0.0.2 ping statistics --- 00:13:46.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.602 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:46.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:46.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:13:46.602 00:13:46.602 --- 10.0.0.1 ping statistics --- 00:13:46.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.602 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=3259331 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 3259331 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # '[' -z 3259331 ']' 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.602 09:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local max_retries=100 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@843 -- # xtrace_disable 00:13:46.602 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:46.602 [2024-10-07 09:34:45.789979] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:13:46.602 [2024-10-07 09:34:45.790067] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.602 [2024-10-07 09:34:45.883262] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.602 [2024-10-07 09:34:45.977462] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.602 [2024-10-07 09:34:45.977533] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.602 [2024-10-07 09:34:45.977543] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.602 [2024-10-07 09:34:45.977550] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.602 [2024-10-07 09:34:45.977556] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
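nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the RPC socket answers. A reduced sketch of that startup handshake — the polling loop is an illustration rather than the repo helper itself, though rpc_get_methods is a standard SPDK RPC, the command line mirrors the traced one, and max_retries=100 matches the trace:

  #!/usr/bin/env bash
  # start the target inside the test namespace, as traced above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the app is ready (or give up)
  for ((i = 0; i < 100; i++)); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null && break
      sleep 0.5
  done
  (( i < 100 )) || { echo "nvmf_tgt never came up"; kill "$nvmfpid"; exit 1; }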
00:13:46.602 [2024-10-07 09:34:45.979662] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.602 [2024-10-07 09:34:45.979794] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.602 [2024-10-07 09:34:45.979966] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.602 [2024-10-07 09:34:45.979967] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@867 -- # return 0 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@733 -- # xtrace_disable 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.175 [2024-10-07 09:34:46.671651] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.175 [2024-10-07 09:34:46.687983] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 
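Stripped of the rpc_cmd/xtrace plumbing, the referral setup just traced reduces to a handful of rpc.py calls; the RPC names, addresses, port 4430 and jq filters below are exactly the ones shown in the trace, and the assertions correspond to the nvmf_discovery_get_referrals checks that follow:

  rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  # the test then asserts all three referrals are visible, first over RPC ...
  [[ $($rpc nvmf_discovery_get_referrals | jq length) == 3 ]]
  $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # ... and then on the wire, via 'nvme discover ... -o json' against 10.0.0.2:8009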
00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:47.175 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:47.176 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:47.438 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:47.438 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:47.438 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:47.438 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:47.438 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:47.438 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.438 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:47.438 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:47.438 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:47.438 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.438 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:47.438 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:47.438 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:47.438 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.438 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:47.438 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:47.438 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:47.438 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:47.438 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.699 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:47.699 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:47.699 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:47.699 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:47.699 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:47.699 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:47.699 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:47.699 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:47.960 09:34:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:47.960 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:48.220 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:48.220 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:48.220 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:48.220 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:48.220 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:48.220 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:48.220 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:48.220 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:48.220 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:48.220 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:48.220 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:48.220 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:48.220 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:48.482 09:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:48.482 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:48.743 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:48.743 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:48.743 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:48.743 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:48.743 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:48.743 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:48.743 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:49.003 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:49.003 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:49.003 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:49.003 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:49.003 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.003 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:49.003 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:49.003 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:49.003 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:49.003 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.003 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:49.003 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:49.003 09:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:49.003 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@564 -- # xtrace_disable 00:13:49.003 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.003 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:13:49.263 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:49.263 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:49.263 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:49.263 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:49.263 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.263 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:49.263 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:49.523 rmmod nvme_tcp 00:13:49.523 rmmod nvme_fabrics 00:13:49.523 rmmod nvme_keyring 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 3259331 ']' 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 3259331 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' -z 3259331 ']' 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # kill -0 3259331 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # uname 00:13:49.523 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:13:49.523 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3259331 00:13:49.523 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:13:49.523 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:13:49.523 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3259331' 00:13:49.523 killing process with pid 3259331 00:13:49.523 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # kill 3259331 00:13:49.523 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@977 -- # wait 3259331 00:13:49.784 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:49.784 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:49.784 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:49.784 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:49.784 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:13:49.784 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:49.784 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:13:49.784 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:49.784 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:49.784 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.784 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.784 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.697 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:51.697 00:13:51.697 real 0m13.604s 00:13:51.697 user 0m16.089s 00:13:51.697 sys 0m6.728s 00:13:51.697 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # xtrace_disable 00:13:51.698 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:51.698 ************************************ 00:13:51.698 END TEST nvmf_referrals 00:13:51.698 ************************************ 00:13:51.698 09:34:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:51.698 09:34:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:13:51.698 09:34:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:13:51.698 09:34:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:51.957 ************************************ 00:13:51.957 START TEST nvmf_connect_disconnect 00:13:51.957 ************************************ 00:13:51.957 09:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:51.957 * Looking for test storage... 00:13:51.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1626 -- # lcov --version 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:51.957 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:13:51.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.958 --rc genhtml_branch_coverage=1 00:13:51.958 --rc genhtml_function_coverage=1 00:13:51.958 --rc genhtml_legend=1 00:13:51.958 --rc geninfo_all_blocks=1 00:13:51.958 --rc geninfo_unexecuted_blocks=1 00:13:51.958 00:13:51.958 ' 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:13:51.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.958 --rc genhtml_branch_coverage=1 00:13:51.958 --rc genhtml_function_coverage=1 00:13:51.958 --rc genhtml_legend=1 00:13:51.958 --rc geninfo_all_blocks=1 00:13:51.958 --rc geninfo_unexecuted_blocks=1 00:13:51.958 00:13:51.958 ' 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:13:51.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.958 --rc genhtml_branch_coverage=1 00:13:51.958 --rc genhtml_function_coverage=1 00:13:51.958 --rc genhtml_legend=1 00:13:51.958 --rc geninfo_all_blocks=1 00:13:51.958 --rc geninfo_unexecuted_blocks=1 00:13:51.958 00:13:51.958 ' 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:13:51.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.958 --rc genhtml_branch_coverage=1 00:13:51.958 --rc genhtml_function_coverage=1 00:13:51.958 --rc genhtml_legend=1 00:13:51.958 --rc geninfo_all_blocks=1 00:13:51.958 --rc geninfo_unexecuted_blocks=1 00:13:51.958 00:13:51.958 ' 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.958 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:52.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:52.218 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:00.410 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:00.410 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.410 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:00.411 Found net devices under 0000:31:00.0: cvl_0_0 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:00.411 Found net devices under 0000:31:00.1: cvl_0_1 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 
-- # (( 2 > 1 )) 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:00.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:14:00.411 00:14:00.411 --- 10.0.0.2 ping statistics --- 00:14:00.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.411 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:00.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:00.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:14:00.411 00:14:00.411 --- 10.0.0.1 ping statistics --- 00:14:00.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.411 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=3264263 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 3264263 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # '[' -z 3264263 ']' 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local max_retries=100 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@843 -- # xtrace_disable 00:14:00.411 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:00.411 [2024-10-07 09:34:59.505496] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
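The nvmftestinit phase above builds a two-port loopback out of the E810 pair: cvl_0_0 is moved into a fresh network namespace and addressed as the target (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP/4420 toward the initiator interface, and a ping in each direction proves the path before the target app starts. A minimal sketch of that sequence, using the interface names and addresses from this log and omitting the error handling in nvmf/common.sh:

    ip netns add cvl_0_0_ns_spdk                      # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                # root namespace -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # namespace -> root namespace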
00:14:00.411 [2024-10-07 09:34:59.505569] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.411 [2024-10-07 09:34:59.601002] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:00.411 [2024-10-07 09:34:59.695973] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.411 [2024-10-07 09:34:59.696035] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.411 [2024-10-07 09:34:59.696044] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.411 [2024-10-07 09:34:59.696051] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.411 [2024-10-07 09:34:59.696058] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.411 [2024-10-07 09:34:59.698506] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.411 [2024-10-07 09:34:59.698723] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.411 [2024-10-07 09:34:59.699053] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:00.411 [2024-10-07 09:34:59.699056] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.732 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:14:00.732 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@867 -- # return 0 00:14:00.732 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:00.732 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@733 -- # xtrace_disable 00:14:00.732 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:01.032 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.032 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:01.032 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:01.032 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:01.032 [2024-10-07 09:35:00.383652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.032 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:01.032 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:01.032 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:01.033 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:01.033 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:01.033 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 
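With the reactors running, connect_disconnect.sh provisions the target entirely over JSON-RPC: a TCP transport with the options shown (-o -u 8192 -c 0), a 64 MiB malloc bdev with 512-byte blocks (hence MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 above), and, in the output that follows, a subsystem exposing that bdev on 10.0.0.2:4420. rpc_cmd is the test suite's wrapper around scripts/rpc.py, so a rough standalone equivalent, assuming the default /var/tmp/spdk.sock socket, is:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512                # returns the name Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME                          # allow any host, set serial
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420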
00:14:01.033 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:01.033 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:01.033 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:01.033 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:01.033 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:01.033 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:01.033 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:01.033 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:01.033 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.033 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:01.033 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:01.033 [2024-10-07 09:35:00.453434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.033 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:01.033 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:01.033 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:01.033 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:04.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:19.497 rmmod nvme_tcp 00:14:19.497 
rmmod nvme_fabrics 00:14:19.497 rmmod nvme_keyring 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 3264263 ']' 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 3264263 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' -z 3264263 ']' 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # kill -0 3264263 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # uname 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3264263 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3264263' 00:14:19.497 killing process with pid 3264263 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # kill 3264263 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@977 -- # wait 3264263 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.497 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.411 09:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:21.411 00:14:21.411 real 0m29.656s 00:14:21.411 user 1m18.834s 00:14:21.411 sys 0m7.410s 00:14:21.411 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # xtrace_disable 00:14:21.411 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:21.411 ************************************ 00:14:21.411 END TEST nvmf_connect_disconnect 00:14:21.411 ************************************ 00:14:21.411 09:35:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:21.411 09:35:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:14:21.411 09:35:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:14:21.411 09:35:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:21.672 ************************************ 00:14:21.672 START TEST nvmf_multitarget 00:14:21.672 ************************************ 00:14:21.672 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:21.672 * Looking for test storage... 00:14:21.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.672 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:14:21.672 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1626 -- # lcov --version 00:14:21.672 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:14:21.672 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:14:21.672 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:21.672 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:21.672 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:21.672 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.672 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:14:21.673 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:14:21.673 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:14:21.673 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:14:21.673 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:14:21.673 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:14:21.673 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:21.673 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:14:21.673 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:14:21.673 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:14:21.673 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:21.673 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:14:21.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.936 --rc genhtml_branch_coverage=1 00:14:21.936 --rc genhtml_function_coverage=1 00:14:21.936 --rc genhtml_legend=1 00:14:21.936 --rc geninfo_all_blocks=1 00:14:21.936 --rc geninfo_unexecuted_blocks=1 00:14:21.936 00:14:21.936 ' 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:14:21.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.936 --rc genhtml_branch_coverage=1 00:14:21.936 --rc genhtml_function_coverage=1 00:14:21.936 --rc genhtml_legend=1 00:14:21.936 --rc geninfo_all_blocks=1 00:14:21.936 --rc geninfo_unexecuted_blocks=1 00:14:21.936 00:14:21.936 ' 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:14:21.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.936 --rc genhtml_branch_coverage=1 00:14:21.936 --rc genhtml_function_coverage=1 00:14:21.936 --rc genhtml_legend=1 00:14:21.936 --rc geninfo_all_blocks=1 00:14:21.936 --rc geninfo_unexecuted_blocks=1 00:14:21.936 00:14:21.936 ' 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:14:21.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.936 --rc genhtml_branch_coverage=1 00:14:21.936 --rc genhtml_function_coverage=1 00:14:21.936 --rc genhtml_legend=1 00:14:21.936 --rc geninfo_all_blocks=1 00:14:21.936 --rc geninfo_unexecuted_blocks=1 00:14:21.936 00:14:21.936 ' 00:14:21.936 09:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.936 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:21.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:14:21.937 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
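gather_supported_nvmf_pci_devs, replayed here at the start of the multitarget run, classifies NICs purely by PCI vendor:device ID: two Intel E810 IDs (0x8086:0x1592 and 0x8086:0x159b, the latter matching both ports on this testbed), the X722 ID, and a list of Mellanox ConnectX IDs, after which pci_devs is narrowed to the e810 set because this job runs with SPDK_TEST_NVMF_NICS=e810. The shape of that lookup, with pci_bus_cache assumed to be an associative array of "vendor:device" -> bus addresses as the common scripts build it:

    declare -A pci_bus_cache=(                     # stand-in value taken from this log
        ["0x8086:0x159b"]="0000:31:00.0 0000:31:00.1"
    )
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})      # E810 variant 1 (no hit here)
    e810+=(${pci_bus_cache["$intel:0x159b"]})      # E810 variant 2 (both ports match)
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x101d"]})    # one of several ConnectX IDs probed
    pci_devs=("${e810[@]}")                        # e810 selected by the job config

Unset keys expand to nothing, so absent device IDs simply leave the arrays unchanged, which is why only the two 0x159b ports survive into pci_devs.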
00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:30.084 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:30.084 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:30.084 09:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:30.084 Found net devices under 0000:31:00.0: cvl_0_0 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:30.084 Found net devices under 0000:31:00.1: cvl_0_1 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.084 09:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:30.084 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:30.085 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:30.085 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:30.085 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:30.085 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:30.085 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:30.085 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:30.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:30.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:14:30.085 00:14:30.085 --- 10.0.0.2 ping statistics --- 00:14:30.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.085 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:14:30.085 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:30.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:30.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:14:30.085 00:14:30.085 --- 10.0.0.1 ping statistics --- 00:14:30.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.085 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:14:30.085 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.085 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:14:30.085 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:30.085 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.085 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:30.085 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:30.085 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.085 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:30.085 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:30.085 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:30.085 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:30.085 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:30.085 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:30.085 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=3272431 00:14:30.085 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 3272431 00:14:30.085 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:30.085 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # '[' -z 3272431 ']' 00:14:30.085 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.085 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local max_retries=100 00:14:30.085 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.085 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@843 -- # xtrace_disable 00:14:30.085 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:30.085 [2024-10-07 09:35:29.090125] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
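nvmfappstart then repeats the target launch for this test: nvmf_tgt is started inside the namespace with core mask 0xF and all tracepoint groups enabled (-e 0xFFFF), and waitforlisten blocks until the app answers on /var/tmp/spdk.sock, with max_retries=100 as the helper logs. A reduced sketch of that readiness loop; rpc_get_methods is just one cheap RPC any SPDK app answers, and the real helper in autotest_common.sh does a little more bookkeeping:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do                       # max_retries=100
        kill -0 "$nvmfpid" 2>/dev/null || exit 1          # target died while starting
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done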
00:14:30.085 [2024-10-07 09:35:29.090184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.085 [2024-10-07 09:35:29.177627] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.085 [2024-10-07 09:35:29.259434] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.085 [2024-10-07 09:35:29.259486] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.085 [2024-10-07 09:35:29.259495] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.085 [2024-10-07 09:35:29.259502] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.085 [2024-10-07 09:35:29.259509] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.085 [2024-10-07 09:35:29.261899] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.085 [2024-10-07 09:35:29.262121] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.085 [2024-10-07 09:35:29.262278] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.085 [2024-10-07 09:35:29.262280] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.343 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:14:30.343 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@867 -- # return 0 00:14:30.343 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:30.343 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@733 -- # xtrace_disable 00:14:30.343 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:30.343 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.343 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:30.343 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:30.343 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:30.602 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:30.602 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:30.602 "nvmf_tgt_1" 00:14:30.602 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:30.602 "nvmf_tgt_2" 00:14:30.602 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
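
multitarget.sh drives everything through a JSON-RPC helper and asserts on jq output rather than parsing free-form text; the second length check follows just below. A condensed sketch of the check/create/check flow traced here, with RPC standing in for the full multitarget_rpc.py path and -s taken to be the subsystem cap it is given in this run:

    RPC=test/nvmf/target/multitarget_rpc.py                     # shortened path, for brevity
    [ "$($RPC nvmf_get_targets | jq length)" != 1 ] && exit 1   # only the default target yet
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32                 # -s: subsystem cap on the new target
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($RPC nvmf_get_targets | jq length)" != 3 ] && exit 1   # default + the two new ones
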
00:14:30.602 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:30.861 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:30.861 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:30.861 true 00:14:30.861 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:31.121 true 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:31.121 rmmod nvme_tcp 00:14:31.121 rmmod nvme_fabrics 00:14:31.121 rmmod nvme_keyring 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 3272431 ']' 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 3272431 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' -z 3272431 ']' 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # kill -0 3272431 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # uname 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:14:31.121 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3272431 00:14:31.381 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:14:31.381 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:14:31.381 09:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3272431' 00:14:31.381 killing process with pid 3272431 00:14:31.381 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # kill 3272431 00:14:31.381 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@977 -- # wait 3272431 00:14:31.381 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:31.381 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:31.381 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:31.381 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:14:31.381 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:14:31.381 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:31.381 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:14:31.381 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:31.381 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:31.381 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.381 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.381 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:33.928 00:14:33.928 real 0m11.909s 00:14:33.928 user 0m9.601s 00:14:33.928 sys 0m6.304s 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # xtrace_disable 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:33.928 ************************************ 00:14:33.928 END TEST nvmf_multitarget 00:14:33.928 ************************************ 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:33.928 ************************************ 00:14:33.928 START TEST nvmf_rpc 00:14:33.928 ************************************ 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:33.928 * Looking for test storage... 
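
Note how the teardown traced above removes the firewall rule: instead of remembering rule numbers, nvmftestfini filters the saved ruleset on the SPDK_NVMF comment tag and restores the result. A sketch of that cleanup, with the namespace removal assumed to be a plain ip netns delete (the body of _remove_spdk_ns is not shown in this log):

    # strip every rule tagged SPDK_NVMF in one pass, whatever its current position
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # deleting the namespace returns the physical port to the root namespace
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1
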
00:14:33.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1626 -- # lcov --version 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:14:33.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.928 --rc genhtml_branch_coverage=1 00:14:33.928 --rc genhtml_function_coverage=1 00:14:33.928 --rc genhtml_legend=1 00:14:33.928 --rc geninfo_all_blocks=1 00:14:33.928 --rc geninfo_unexecuted_blocks=1 00:14:33.928 00:14:33.928 ' 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:14:33.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.928 --rc genhtml_branch_coverage=1 00:14:33.928 --rc genhtml_function_coverage=1 00:14:33.928 --rc genhtml_legend=1 00:14:33.928 --rc geninfo_all_blocks=1 00:14:33.928 --rc geninfo_unexecuted_blocks=1 00:14:33.928 00:14:33.928 ' 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:14:33.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.928 --rc genhtml_branch_coverage=1 00:14:33.928 --rc genhtml_function_coverage=1 00:14:33.928 --rc genhtml_legend=1 00:14:33.928 --rc geninfo_all_blocks=1 00:14:33.928 --rc geninfo_unexecuted_blocks=1 00:14:33.928 00:14:33.928 ' 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:14:33.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.928 --rc genhtml_branch_coverage=1 00:14:33.928 --rc genhtml_function_coverage=1 00:14:33.928 --rc genhtml_legend=1 00:14:33.928 --rc geninfo_all_blocks=1 00:14:33.928 --rc geninfo_unexecuted_blocks=1 00:14:33.928 00:14:33.928 ' 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
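
The cmp_versions trace above is a plain component-wise numeric compare: split both versions on '.', '-' and ':', pad the shorter with zeros, and compare left to right. A compact re-derivation, not the exact helper (the real code in scripts/common.sh also validates each component through decimal()):

    lt() {   # lt 1.15 2  -> exit 0 when $1 sorts before $2
        local IFS=.-: i v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    # as used above: pre-2.x lcov needs the coverage knobs passed as --rc options
    lt "$(lcov --version | awk '{print $NF}')" 2 \
        && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
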
00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.928 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:33.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:14:33.929 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:42.074 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:42.074 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:42.074 Found net devices under 0000:31:00.0: cvl_0_0 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:42.074 Found net devices under 0000:31:00.1: cvl_0_1 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.074 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:42.075 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:42.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:14:42.075 00:14:42.075 --- 10.0.0.2 ping statistics --- 00:14:42.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.075 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:42.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
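
Once this second namespace checks out, nvmfappstart (traced just below) launches the target under the namespace command prefix and waits for its RPC socket before the test proceeds. A sketch with the wait reduced to a socket poll; the real waitforlisten in autotest_common.sh uses the same max_retries=100 budget but is more careful about startup failures:

    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # UNIX sockets live on the filesystem, so this check works from the root namespace
        [ -S /var/tmp/spdk.sock ] && break
        kill -0 "$nvmfpid" || exit 1   # target died during startup
        sleep 0.1
    done
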
00:14:42.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:14:42.075 00:14:42.075 --- 10.0.0.1 ping statistics --- 00:14:42.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.075 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=3277203 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 3277203 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # '[' -z 3277203 ']' 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local max_retries=100 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@843 -- # xtrace_disable 00:14:42.075 09:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.075 [2024-10-07 09:35:41.202289] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:14:42.075 [2024-10-07 09:35:41.202349] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.075 [2024-10-07 09:35:41.295872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:42.075 [2024-10-07 09:35:41.390685] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.075 [2024-10-07 09:35:41.390755] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.075 [2024-10-07 09:35:41.390763] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.075 [2024-10-07 09:35:41.390771] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.075 [2024-10-07 09:35:41.390777] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.075 [2024-10-07 09:35:41.392924] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.075 [2024-10-07 09:35:41.393170] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.075 [2024-10-07 09:35:41.393321] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:42.075 [2024-10-07 09:35:41.393323] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@867 -- # return 0 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@733 -- # xtrace_disable 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:42.650 "tick_rate": 2400000000, 00:14:42.650 "poll_groups": [ 00:14:42.650 { 00:14:42.650 "name": "nvmf_tgt_poll_group_000", 00:14:42.650 "admin_qpairs": 0, 00:14:42.650 "io_qpairs": 0, 00:14:42.650 "current_admin_qpairs": 0, 00:14:42.650 "current_io_qpairs": 0, 00:14:42.650 "pending_bdev_io": 0, 00:14:42.650 "completed_nvme_io": 0, 00:14:42.650 "transports": [] 00:14:42.650 }, 00:14:42.650 { 00:14:42.650 "name": "nvmf_tgt_poll_group_001", 00:14:42.650 "admin_qpairs": 0, 00:14:42.650 "io_qpairs": 0, 00:14:42.650 "current_admin_qpairs": 0, 00:14:42.650 "current_io_qpairs": 0, 00:14:42.650 "pending_bdev_io": 0, 00:14:42.650 "completed_nvme_io": 0, 00:14:42.650 "transports": [] 00:14:42.650 }, 00:14:42.650 { 00:14:42.650 "name": "nvmf_tgt_poll_group_002", 00:14:42.650 "admin_qpairs": 0, 00:14:42.650 "io_qpairs": 0, 00:14:42.650 
"current_admin_qpairs": 0, 00:14:42.650 "current_io_qpairs": 0, 00:14:42.650 "pending_bdev_io": 0, 00:14:42.650 "completed_nvme_io": 0, 00:14:42.650 "transports": [] 00:14:42.650 }, 00:14:42.650 { 00:14:42.650 "name": "nvmf_tgt_poll_group_003", 00:14:42.650 "admin_qpairs": 0, 00:14:42.650 "io_qpairs": 0, 00:14:42.650 "current_admin_qpairs": 0, 00:14:42.650 "current_io_qpairs": 0, 00:14:42.650 "pending_bdev_io": 0, 00:14:42.650 "completed_nvme_io": 0, 00:14:42.650 "transports": [] 00:14:42.650 } 00:14:42.650 ] 00:14:42.650 }' 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.650 [2024-10-07 09:35:42.198208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:42.650 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:42.650 "tick_rate": 2400000000, 00:14:42.650 "poll_groups": [ 00:14:42.650 { 00:14:42.650 "name": "nvmf_tgt_poll_group_000", 00:14:42.650 "admin_qpairs": 0, 00:14:42.650 "io_qpairs": 0, 00:14:42.650 "current_admin_qpairs": 0, 00:14:42.650 "current_io_qpairs": 0, 00:14:42.650 "pending_bdev_io": 0, 00:14:42.650 "completed_nvme_io": 0, 00:14:42.650 "transports": [ 00:14:42.650 { 00:14:42.650 "trtype": "TCP" 00:14:42.650 } 00:14:42.650 ] 00:14:42.650 }, 00:14:42.650 { 00:14:42.650 "name": "nvmf_tgt_poll_group_001", 00:14:42.650 "admin_qpairs": 0, 00:14:42.650 "io_qpairs": 0, 00:14:42.650 "current_admin_qpairs": 0, 00:14:42.650 "current_io_qpairs": 0, 00:14:42.651 "pending_bdev_io": 0, 00:14:42.651 "completed_nvme_io": 0, 00:14:42.651 "transports": [ 00:14:42.651 { 00:14:42.651 "trtype": "TCP" 00:14:42.651 } 00:14:42.651 ] 00:14:42.651 }, 00:14:42.651 { 00:14:42.651 "name": "nvmf_tgt_poll_group_002", 00:14:42.651 "admin_qpairs": 0, 00:14:42.651 "io_qpairs": 0, 00:14:42.651 "current_admin_qpairs": 0, 00:14:42.651 "current_io_qpairs": 0, 00:14:42.651 "pending_bdev_io": 0, 00:14:42.651 "completed_nvme_io": 0, 00:14:42.651 "transports": [ 00:14:42.651 { 00:14:42.651 "trtype": "TCP" 
00:14:42.651 } 00:14:42.651 ] 00:14:42.651 }, 00:14:42.651 { 00:14:42.651 "name": "nvmf_tgt_poll_group_003", 00:14:42.651 "admin_qpairs": 0, 00:14:42.651 "io_qpairs": 0, 00:14:42.651 "current_admin_qpairs": 0, 00:14:42.651 "current_io_qpairs": 0, 00:14:42.651 "pending_bdev_io": 0, 00:14:42.651 "completed_nvme_io": 0, 00:14:42.651 "transports": [ 00:14:42.651 { 00:14:42.651 "trtype": "TCP" 00:14:42.651 } 00:14:42.651 ] 00:14:42.651 } 00:14:42.651 ] 00:14:42.651 }' 00:14:42.651 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:42.651 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:42.651 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:42.651 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:42.651 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:42.651 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:42.651 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:42.651 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:42.651 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.913 Malloc1 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.913 [2024-10-07 09:35:42.392526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # local es=0 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@641 -- # local arg=nvme 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@645 -- # type -t nvme 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@647 -- # type -P nvme 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@647 -- # arg=/usr/sbin/nvme 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@647 -- # [[ -x /usr/sbin/nvme ]] 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@656 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:14:42.913 [2024-10-07 09:35:42.429665] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:14:42.913 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:42.913 could not add new controller: failed to write to nvme-fabrics device 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@656 -- # es=1 00:14:42.913 09:35:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:42.913 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:44.831 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:44.831 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local i=0 00:14:44.831 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local nvme_device_counter=1 nvme_devices=0 00:14:44.831 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # [[ -n '' ]] 00:14:44.831 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # sleep 2 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # (( i++ <= 15 )) 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # lsblk -l -o NAME,SERIAL 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # grep -c SPDKISFASTANDAWESOME 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # nvme_devices=1 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # (( nvme_devices == nvme_device_counter )) 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # return 0 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:46.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # local i=0 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -o NAME,SERIAL 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME,SERIAL 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1230 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1234 -- # return 0 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:46.742 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # local es=0 00:14:46.743 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:46.743 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@641 -- # local arg=nvme 00:14:46.743 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:14:46.743 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@645 -- # type -t nvme 00:14:46.743 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:14:46.743 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@647 -- # type -P nvme 00:14:46.743 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:14:46.743 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@647 -- # arg=/usr/sbin/nvme 00:14:46.743 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@647 -- # [[ -x /usr/sbin/nvme ]] 00:14:46.743 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@656 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:46.743 [2024-10-07 09:35:46.213779] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:14:46.743 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:46.743 could not add new controller: failed to write to nvme-fabrics device 00:14:46.743 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@656 -- # es=1 00:14:46.743 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:14:46.743 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:14:46.743 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:14:46.743 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:46.743 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:46.743 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.743 
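The exchange traced above exercises SPDK's per-subsystem host allow-list: a connect from a host NQN that is not registered fails with "does not allow host", succeeds after nvmf_subsystem_add_host, fails again once the host is removed, and nvmf_subsystem_allow_any_host -e then opens the subsystem to every initiator. A condensed sketch of the same flow, assuming rpc_cmd in this trace forwards to SPDK's scripts/rpc.py and using a placeholder host NQN:

    # Host allow-list flow (placeholder NQN value; rpc.py path assumed).
    RPC=./scripts/rpc.py
    SUBNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:example-host

    $RPC nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
    # Host not on the allow-list yet: the connect is expected to fail.
    nvme connect -t tcp -n "$SUBNQN" -q "$HOSTNQN" -a 10.0.0.2 -s 4420 || echo rejected
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"     # now the connect succeeds
    nvme connect -t tcp -n "$SUBNQN" -q "$HOSTNQN" -a 10.0.0.2 -s 4420
    nvme disconnect -n "$SUBNQN"
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"  # rejection returns
    $RPC nvmf_subsystem_allow_any_host -e "$SUBNQN"       # any host may now connect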
09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:46.743 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:48.654 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:48.655 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local i=0 00:14:48.655 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local nvme_device_counter=1 nvme_devices=0 00:14:48.655 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # [[ -n '' ]] 00:14:48.655 09:35:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # sleep 2 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # (( i++ <= 15 )) 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # lsblk -l -o NAME,SERIAL 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # grep -c SPDKISFASTANDAWESOME 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # nvme_devices=1 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # (( nvme_devices == nvme_device_counter )) 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # return 0 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:50.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # local i=0 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -o NAME,SERIAL 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME,SERIAL 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1230 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1234 -- # return 0 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:50.572 
09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.572 [2024-10-07 09:35:49.989006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:50.572 09:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.572 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:50.572 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:50.572 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:50.572 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.572 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:50.572 09:35:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:51.956 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:51.956 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local i=0 00:14:51.956 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local nvme_device_counter=1 nvme_devices=0 00:14:51.956 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # [[ -n '' ]] 00:14:51.956 09:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # sleep 2 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # (( i++ <= 15 )) 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # lsblk -l -o NAME,SERIAL 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # grep -c SPDKISFASTANDAWESOME 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # nvme_devices=1 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # (( nvme_devices == nvme_device_counter )) 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # return 0 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:54.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # local i=0 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -o NAME,SERIAL 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME,SERIAL 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1230 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1234 -- # return 0 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.505 [2024-10-07 09:35:53.780356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:54.505 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:55.893 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:55.893 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local i=0 00:14:55.893 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local nvme_device_counter=1 nvme_devices=0 00:14:55.893 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # [[ -n '' ]] 00:14:55.893 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # sleep 2 00:14:57.810 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # (( i++ <= 15 )) 00:14:57.810 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # lsblk -l -o NAME,SERIAL 00:14:57.810 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # grep -c SPDKISFASTANDAWESOME 00:14:57.810 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # nvme_devices=1 00:14:57.810 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # (( nvme_devices == nvme_device_counter )) 00:14:57.810 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # return 0 00:14:57.810 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:57.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.810 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:57.810 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # local i=0 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -o NAME,SERIAL 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME,SERIAL 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1230 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1234 -- # return 0 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.071 [2024-10-07 09:35:57.542256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.071 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:58.072 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:58.072 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:14:58.072 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.072 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:14:58.072 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:59.456 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:59.456 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local i=0 00:14:59.456 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local nvme_device_counter=1 nvme_devices=0 00:14:59.456 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # [[ -n '' ]] 00:14:59.456 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # sleep 2 00:15:02.002 
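The waitforserial polling that recurs throughout this run (sleep 2, count matching block devices with lsblk, compare against the expected count, bounded retries) can be reconstructed from the traced commands as roughly the following; the real helper lives in common/autotest_common.sh and may differ in detail:

    # Approximate reconstruction of waitforserial from the trace above.
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        while ((i++ <= 15)); do
            sleep 2
            # Count block devices whose SERIAL column matches.
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((nvme_devices == nvme_device_counter)) && return 0
        done
        return 1
    }

waitforserial_disconnect is the mirror image, returning once grep -q -w no longer finds the serial in the lsblk output.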
09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # (( i++ <= 15 )) 00:15:02.002 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # lsblk -l -o NAME,SERIAL 00:15:02.002 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # grep -c SPDKISFASTANDAWESOME 00:15:02.002 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # nvme_devices=1 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # (( nvme_devices == nvme_device_counter )) 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # return 0 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:02.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # local i=0 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -o NAME,SERIAL 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME,SERIAL 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1230 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1234 -- # return 0 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 
00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.003 [2024-10-07 09:36:01.219264] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:02.003 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.387 09:36:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:03.387 09:36:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local i=0 00:15:03.387 09:36:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local nvme_device_counter=1 nvme_devices=0 00:15:03.387 09:36:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # [[ -n '' ]] 00:15:03.387 09:36:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # sleep 2 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # (( i++ <= 15 )) 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # lsblk -l -o NAME,SERIAL 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # grep -c SPDKISFASTANDAWESOME 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # nvme_devices=1 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # (( nvme_devices == nvme_device_counter )) 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # return 0 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:05.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # local i=0 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -o NAME,SERIAL 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 
00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME,SERIAL 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1230 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1234 -- # return 0 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.301 [2024-10-07 09:36:04.938228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:05.301 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.562 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:05.562 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:06.943 09:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:06.943 09:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local i=0 00:15:06.943 09:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local nvme_device_counter=1 nvme_devices=0 00:15:06.943 09:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # [[ -n '' ]] 00:15:06.943 09:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # sleep 2 00:15:09.485 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # (( i++ <= 15 )) 00:15:09.485 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # lsblk -l -o NAME,SERIAL 00:15:09.485 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # grep -c SPDKISFASTANDAWESOME 00:15:09.485 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # nvme_devices=1 00:15:09.485 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # (( nvme_devices == nvme_device_counter )) 00:15:09.485 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # return 0 00:15:09.485 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:09.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.485 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:09.485 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # local i=0 00:15:09.485 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -o NAME,SERIAL 00:15:09.485 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.485 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME,SERIAL 00:15:09.485 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1230 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.485 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1234 -- # return 0 00:15:09.485 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:09.485 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:09.486 
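Each of the five loop iterations traced above follows the same create/connect/teardown pattern. Condensed from the trace (rpc_cmd is assumed to forward to scripts/rpc.py against the running target; the --hostnqn/--hostid connect flags are dropped for brevity, and waitforserial/waitforserial_disconnect are the helpers sketched earlier):

    # One pass of the target/rpc.sh@81-94 loop, as traced.
    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # nsid 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

The second loop that follows (target/rpc.sh@99-107) repeats the subsystem/namespace churn five more times without connecting a host.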
09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 [2024-10-07 09:36:08.704256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 [2024-10-07 09:36:08.772420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 
09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 [2024-10-07 09:36:08.836579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 [2024-10-07 09:36:08.908815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.486 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.487 [2024-10-07 09:36:08.981038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.487 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:09.487 "tick_rate": 2400000000, 00:15:09.487 "poll_groups": [ 00:15:09.487 { 00:15:09.487 "name": "nvmf_tgt_poll_group_000", 00:15:09.487 "admin_qpairs": 0, 00:15:09.487 "io_qpairs": 224, 00:15:09.487 "current_admin_qpairs": 0, 00:15:09.487 "current_io_qpairs": 0, 00:15:09.487 "pending_bdev_io": 0, 00:15:09.487 "completed_nvme_io": 347, 00:15:09.487 "transports": [ 00:15:09.487 { 00:15:09.487 "trtype": "TCP" 00:15:09.487 } 00:15:09.487 ] 00:15:09.487 }, 00:15:09.487 { 00:15:09.487 "name": "nvmf_tgt_poll_group_001", 00:15:09.487 "admin_qpairs": 1, 00:15:09.487 "io_qpairs": 223, 00:15:09.487 "current_admin_qpairs": 0, 00:15:09.487 "current_io_qpairs": 0, 00:15:09.487 "pending_bdev_io": 0, 00:15:09.487 "completed_nvme_io": 447, 00:15:09.487 "transports": [ 00:15:09.487 { 00:15:09.487 "trtype": "TCP" 00:15:09.487 } 00:15:09.487 ] 00:15:09.487 }, 00:15:09.487 { 00:15:09.487 "name": "nvmf_tgt_poll_group_002", 00:15:09.487 "admin_qpairs": 6, 00:15:09.487 "io_qpairs": 218, 00:15:09.487 "current_admin_qpairs": 0, 00:15:09.487 "current_io_qpairs": 0, 00:15:09.487 "pending_bdev_io": 0, 00:15:09.487 "completed_nvme_io": 218, 00:15:09.487 "transports": [ 00:15:09.487 { 00:15:09.487 "trtype": "TCP" 00:15:09.487 } 00:15:09.487 ] 00:15:09.487 }, 00:15:09.487 { 00:15:09.487 "name": "nvmf_tgt_poll_group_003", 00:15:09.487 "admin_qpairs": 0, 00:15:09.487 "io_qpairs": 224, 00:15:09.487 "current_admin_qpairs": 0, 00:15:09.487 "current_io_qpairs": 0, 00:15:09.487 "pending_bdev_io": 0, 00:15:09.487 "completed_nvme_io": 227, 00:15:09.487 "transports": [ 00:15:09.487 { 00:15:09.487 "trtype": "TCP" 00:15:09.487 } 00:15:09.487 ] 00:15:09.487 } 00:15:09.487 ] 00:15:09.487 }' 00:15:09.487 09:36:09 
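The pass/fail checks that follow nvmf_get_stats go through the jsum helper, which per the trace below (target/rpc.sh@19-20) sums one numeric field across all poll groups:

    # jsum as reconstructed from the trace: apply a jq filter to the captured
    # $stats JSON (set by the caller) and total the resulting column with awk.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
    }

For the stats printed above, '.poll_groups[].admin_qpairs' sums to 0+1+6+0 = 7 and '.poll_groups[].io_qpairs' to 224+223+218+224 = 889, matching the (( 7 > 0 )) and (( 889 > 0 )) assertions in the trace.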
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:09.487 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:09.748 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:09.749 rmmod nvme_tcp 00:15:09.749 rmmod nvme_fabrics 00:15:09.749 rmmod nvme_keyring 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 3277203 ']' 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 3277203 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' -z 3277203 ']' 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # kill -0 3277203 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # uname 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3277203 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # echo 'killing process with pid 
3277203' 00:15:09.749 killing process with pid 3277203 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # kill 3277203 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@977 -- # wait 3277203 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:09.749 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:15:10.010 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:10.010 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:10.010 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.010 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:10.010 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.924 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:11.924 00:15:11.924 real 0m38.384s 00:15:11.924 user 1m54.143s 00:15:11.924 sys 0m8.067s 00:15:11.924 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:11.924 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.924 ************************************ 00:15:11.924 END TEST nvmf_rpc 00:15:11.924 ************************************ 00:15:11.924 09:36:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:11.924 09:36:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:15:11.924 09:36:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:11.924 09:36:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:11.924 ************************************ 00:15:11.924 START TEST nvmf_invalid 00:15:11.924 ************************************ 00:15:11.924 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:12.185 * Looking for test storage... 
00:15:12.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:12.185 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:15:12.185 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1626 -- # lcov --version 00:15:12.185 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:15:12.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.186 --rc genhtml_branch_coverage=1 00:15:12.186 --rc genhtml_function_coverage=1 00:15:12.186 --rc genhtml_legend=1 00:15:12.186 --rc geninfo_all_blocks=1 00:15:12.186 --rc geninfo_unexecuted_blocks=1 00:15:12.186 00:15:12.186 ' 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:15:12.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.186 --rc genhtml_branch_coverage=1 00:15:12.186 --rc genhtml_function_coverage=1 00:15:12.186 --rc genhtml_legend=1 00:15:12.186 --rc geninfo_all_blocks=1 00:15:12.186 --rc geninfo_unexecuted_blocks=1 00:15:12.186 00:15:12.186 ' 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:15:12.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.186 --rc genhtml_branch_coverage=1 00:15:12.186 --rc genhtml_function_coverage=1 00:15:12.186 --rc genhtml_legend=1 00:15:12.186 --rc geninfo_all_blocks=1 00:15:12.186 --rc geninfo_unexecuted_blocks=1 00:15:12.186 00:15:12.186 ' 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:15:12.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.186 --rc genhtml_branch_coverage=1 00:15:12.186 --rc genhtml_function_coverage=1 00:15:12.186 --rc genhtml_legend=1 00:15:12.186 --rc geninfo_all_blocks=1 00:15:12.186 --rc geninfo_unexecuted_blocks=1 00:15:12.186 00:15:12.186 ' 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:12.186 09:36:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain dirs repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[same three toolchain dirs repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same three toolchain dirs repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same three toolchain dirs repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.186 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.186 09:36:11
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:12.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:12.187 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:12.187 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:12.447 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:12.447 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:12.447 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.447 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:12.447 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:12.447 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:12.447 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:12.447 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:12.447 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.447 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:12.447 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:12.447 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:12.447 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.447 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.447 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.447 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:12.447 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:12.448 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:15:12.448 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:15:20.598 
09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:20.598 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:20.599 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:20.599 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:20.599 Found net devices under 0000:31:00.0: cvl_0_0 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:20.599 
Found net devices under 0000:31:00.1: cvl_0_1 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:20.599 
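Collected in one place, the nvmf_tcp_init plumbing just traced: the first e810 port (cvl_0_0) is moved into a private network namespace to act as the target, the second port (cvl_0_1) stays in the root namespace as the initiator, and one tagged iptables rule opens the NVMe/TCP listener port. Every command below appears in the trace above; only the comments are added:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC leaves root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The comment tag is what lets the iptr teardown shown earlier strip exactly these rules with grep -v SPDK_NVMF.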
09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:20.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:20.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:15:20.599 00:15:20.599 --- 10.0.0.2 ping statistics --- 00:15:20.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.599 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:20.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:20.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:15:20.599 00:15:20.599 --- 10.0.0.1 ping statistics --- 00:15:20.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.599 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=3287679 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 3287679 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # '[' -z 3287679 ']' 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local max_retries=100 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
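With both directions pinging, the harness prepends the namespace wrapper to NVMF_APP and starts the target, then blocks until the RPC socket answers. The polling loop below is a simplified stand-in for waitforlisten (an assumption; the real helper lives in autotest_common.sh), while the binary path and flags are exactly those traced:

ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
while ! /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.1   # keep polling until nvmf_tgt listens on /var/tmp/spdk.sock
done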
00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@843 -- # xtrace_disable 00:15:20.599 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:20.599 [2024-10-07 09:36:19.699578] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:15:20.599 [2024-10-07 09:36:19.699647] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.599 [2024-10-07 09:36:19.794638] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.599 [2024-10-07 09:36:19.889920] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.599 [2024-10-07 09:36:19.889988] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.599 [2024-10-07 09:36:19.889997] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.600 [2024-10-07 09:36:19.890004] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.600 [2024-10-07 09:36:19.890011] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.600 [2024-10-07 09:36:19.892251] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.600 [2024-10-07 09:36:19.892417] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.600 [2024-10-07 09:36:19.892579] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.600 [2024-10-07 09:36:19.892581] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.173 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:15:21.173 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@867 -- # return 0 00:15:21.173 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:21.173 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@733 -- # xtrace_disable 00:15:21.173 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:21.173 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.173 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:21.173 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25128 00:15:21.173 [2024-10-07 09:36:20.738492] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:21.173 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:21.173 { 00:15:21.173 "nqn": "nqn.2016-06.io.spdk:cnode25128", 00:15:21.173 "tgt_name": "foobar", 00:15:21.173 "method": "nvmf_create_subsystem", 00:15:21.173 "req_id": 1 00:15:21.173 } 00:15:21.173 Got JSON-RPC error response 00:15:21.173 response: 00:15:21.173 { 00:15:21.173 "code": -32603, 00:15:21.173 "message": "Unable to find target foobar" 00:15:21.173 }' 00:15:21.173 
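That first failure is the template every case in invalid.sh follows: issue an RPC that must be rejected, capture rpc.py's error envelope, and glob-match the expected message, as the @41 check on the next lines does. A minimal restatement (the 2>&1 capture and the trailing || true are assumptions here, added so the sketch survives set -e):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode
# -t foobar names a target that does not exist, so the RPC must fail
out=$($rpc nvmf_create_subsystem -t foobar "${nqn}25128" 2>&1) || true
[[ $out == *'Unable to find target foobar'* ]] && echo 'negative test passed'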
09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:21.173 { 00:15:21.173 "nqn": "nqn.2016-06.io.spdk:cnode25128", 00:15:21.173 "tgt_name": "foobar", 00:15:21.173 "method": "nvmf_create_subsystem", 00:15:21.173 "req_id": 1 00:15:21.173 } 00:15:21.173 Got JSON-RPC error response 00:15:21.173 response: 00:15:21.173 { 00:15:21.173 "code": -32603, 00:15:21.173 "message": "Unable to find target foobar" 00:15:21.173 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:21.173 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:21.173 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22081 00:15:21.434 [2024-10-07 09:36:20.947376] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22081: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:21.434 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:21.434 { 00:15:21.434 "nqn": "nqn.2016-06.io.spdk:cnode22081", 00:15:21.434 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:21.434 "method": "nvmf_create_subsystem", 00:15:21.434 "req_id": 1 00:15:21.434 } 00:15:21.434 Got JSON-RPC error response 00:15:21.434 response: 00:15:21.434 { 00:15:21.434 "code": -32602, 00:15:21.434 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:21.434 }' 00:15:21.434 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:21.434 { 00:15:21.434 "nqn": "nqn.2016-06.io.spdk:cnode22081", 00:15:21.434 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:21.435 "method": "nvmf_create_subsystem", 00:15:21.435 "req_id": 1 00:15:21.435 } 00:15:21.435 Got JSON-RPC error response 00:15:21.435 response: 00:15:21.435 { 00:15:21.435 "code": -32602, 00:15:21.435 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:21.435 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:21.435 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:21.435 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18915 00:15:21.697 [2024-10-07 09:36:21.152148] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18915: invalid model number 'SPDK_Controller' 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:21.697 { 00:15:21.697 "nqn": "nqn.2016-06.io.spdk:cnode18915", 00:15:21.697 "model_number": "SPDK_Controller\u001f", 00:15:21.697 "method": "nvmf_create_subsystem", 00:15:21.697 "req_id": 1 00:15:21.697 } 00:15:21.697 Got JSON-RPC error response 00:15:21.697 response: 00:15:21.697 { 00:15:21.697 "code": -32602, 00:15:21.697 "message": "Invalid MN SPDK_Controller\u001f" 00:15:21.697 }' 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:21.697 { 00:15:21.697 "nqn": "nqn.2016-06.io.spdk:cnode18915", 00:15:21.697 "model_number": "SPDK_Controller\u001f", 00:15:21.697 "method": "nvmf_create_subsystem", 00:15:21.697 "req_id": 1 00:15:21.697 } 00:15:21.697 Got JSON-RPC error response 00:15:21.697 response: 00:15:21.697 { 00:15:21.697 "code": -32602, 00:15:21.697 "message": "Invalid 
MN SPDK_Controller\u001f" 00:15:21.697 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:15:21.697 09:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:21.697 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.698 
09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.698 
09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:15:21.698 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ a == \- ]] 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'a\fue4!Gv@E|J#%HBm2N' 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'a\fue4!Gv@E|J#%HBm2N' nqn.2016-06.io.spdk:cnode318 00:15:21.961 [2024-10-07 09:36:21.533687] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode318: invalid serial number 'a\fue4!Gv@E|J#%HBm2N' 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:21.961 { 00:15:21.961 "nqn": "nqn.2016-06.io.spdk:cnode318", 00:15:21.961 "serial_number": "a\\fue\u007f4!Gv@E|J#%HBm2N", 00:15:21.961 "method": "nvmf_create_subsystem", 00:15:21.961 "req_id": 1 00:15:21.961 } 00:15:21.961 Got JSON-RPC error response 00:15:21.961 response: 00:15:21.961 { 00:15:21.961 "code": -32602, 00:15:21.961 "message": "Invalid SN a\\fue\u007f4!Gv@E|J#%HBm2N" 00:15:21.961 }' 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:21.961 { 00:15:21.961 "nqn": "nqn.2016-06.io.spdk:cnode318", 00:15:21.961 "serial_number": "a\\fue\u007f4!Gv@E|J#%HBm2N", 00:15:21.961 "method": "nvmf_create_subsystem", 00:15:21.961 "req_id": 1 00:15:21.961 } 00:15:21.961 Got JSON-RPC error response 00:15:21.961 response: 00:15:21.961 { 00:15:21.961 "code": -32602, 00:15:21.961 "message": "Invalid SN a\\fue\u007f4!Gv@E|J#%HBm2N" 00:15:21.961 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=41 ll 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:15:21.961 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:21.962 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:15:21.962 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.962 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.962 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:15:21.962 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:21.962 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:15:21.962 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:15:21.962 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:22.227 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24-25 -- # [repetitive per-character trace condensed: the loop cycles "(( ll++ ))", "(( ll < length ))", "printf %x <code>", "echo -e '\x<code>'" and "string+=<char>" once for each remaining character of the 41-character random model number echoed below]
00:15:22.497 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ c == \- ]]
00:15:22.497 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'ctP;M"H97TI,Zk&nqw8hZkP]qAC^Ty:Oh&mNdqLRJ'
00:15:22.497 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'ctP;M"H97TI,Zk&nqw8hZkP]qAC^Ty:Oh&mNdqLRJ' nqn.2016-06.io.spdk:cnode29121
[2024-10-07 09:36:22.087826] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29121: invalid model number 'ctP;M"H97TI,Zk&nqw8hZkP]qAC^Ty:Oh&mNdqLRJ'
00:15:22.497 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:15:22.497 {
00:15:22.497 "nqn": "nqn.2016-06.io.spdk:cnode29121",
00:15:22.497 "model_number": "ctP;M\"H97TI,Zk&nqw8hZkP]qAC^Ty:Oh&mNdqLRJ",
00:15:22.497 "method": "nvmf_create_subsystem",
00:15:22.497 "req_id": 1
00:15:22.497 }
00:15:22.497 Got JSON-RPC error response
00:15:22.497 response:
00:15:22.497 {
00:15:22.497 "code": -32602,
00:15:22.497 "message": "Invalid MN ctP;M\"H97TI,Zk&nqw8hZkP]qAC^Ty:Oh&mNdqLRJ"
00:15:22.497 }'
00:15:22.497 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:15:22.497 {
00:15:22.497 "nqn": "nqn.2016-06.io.spdk:cnode29121",
00:15:22.497 "model_number": "ctP;M\"H97TI,Zk&nqw8hZkP]qAC^Ty:Oh&mNdqLRJ",
00:15:22.497 "method": "nvmf_create_subsystem",
00:15:22.497 "req_id": 1
00:15:22.497 }
00:15:22.497 Got JSON-RPC error response
00:15:22.497 response:
00:15:22.497 {
00:15:22.497 "code": -32602,
00:15:22.497 "message": "Invalid MN ctP;M\"H97TI,Zk&nqw8hZkP]qAC^Ty:Oh&mNdqLRJ"
00:15:22.497 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:15:22.497 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:15:22.758 [2024-10-07 09:36:22.292630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:22.758 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
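The condensed loop above is the shell trick invalid.sh uses to fabricate a random, deliberately over-length model number one byte at a time. A standalone sketch of the same technique follows; the character range, helper names, and the final assertion are our assumptions, not lifted from the script:

    #!/usr/bin/env bash
    # Sketch: assemble a random printable string one byte at a time, the way
    # the traced loop does, then expect the target to reject it as an MN.
    length=41                               # 41 characters, as in the trace; one more than the 40-byte NVMe MN field
    string=
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( (RANDOM % 94) + 33 ))      # printable ASCII 33..126 (assumed range)
        string+=$(echo -e "\x$(printf %x "$code")")
    done
    echo "$string"
    out=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            nvmf_create_subsystem -d "$string" nqn.2016-06.io.spdk:cnode29121 2>&1) || true
    [[ $out == *"Invalid MN"* ]] && echo "rejected with -32602, as in the trace above"

The pattern match on the captured output mirrors the [[ ... == *Invalid MN* ]] check the test itself performs above.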
target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:23.019 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:23.019 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:23.019 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:23.019 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:23.019 [2024-10-07 09:36:22.677855] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:23.280 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:23.280 { 00:15:23.280 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:23.280 "listen_address": { 00:15:23.280 "trtype": "tcp", 00:15:23.280 "traddr": "", 00:15:23.280 "trsvcid": "4421" 00:15:23.280 }, 00:15:23.280 "method": "nvmf_subsystem_remove_listener", 00:15:23.280 "req_id": 1 00:15:23.280 } 00:15:23.280 Got JSON-RPC error response 00:15:23.280 response: 00:15:23.280 { 00:15:23.280 "code": -32602, 00:15:23.280 "message": "Invalid parameters" 00:15:23.280 }' 00:15:23.280 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:15:23.280 { 00:15:23.280 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:23.280 "listen_address": { 00:15:23.280 "trtype": "tcp", 00:15:23.280 "traddr": "", 00:15:23.280 "trsvcid": "4421" 00:15:23.280 }, 00:15:23.280 "method": "nvmf_subsystem_remove_listener", 00:15:23.280 "req_id": 1 00:15:23.280 } 00:15:23.280 Got JSON-RPC error response 00:15:23.280 response: 00:15:23.280 { 00:15:23.280 "code": -32602, 00:15:23.280 "message": "Invalid parameters" 00:15:23.280 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:23.280 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6132 -i 0 00:15:23.280 [2024-10-07 09:36:22.862381] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6132: invalid cntlid range [0-65519] 00:15:23.280 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:15:23.280 { 00:15:23.280 "nqn": "nqn.2016-06.io.spdk:cnode6132", 00:15:23.280 "min_cntlid": 0, 00:15:23.280 "method": "nvmf_create_subsystem", 00:15:23.280 "req_id": 1 00:15:23.280 } 00:15:23.280 Got JSON-RPC error response 00:15:23.280 response: 00:15:23.280 { 00:15:23.280 "code": -32602, 00:15:23.280 "message": "Invalid cntlid range [0-65519]" 00:15:23.280 }' 00:15:23.280 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:23.280 { 00:15:23.280 "nqn": "nqn.2016-06.io.spdk:cnode6132", 00:15:23.280 "min_cntlid": 0, 00:15:23.280 "method": "nvmf_create_subsystem", 00:15:23.280 "req_id": 1 00:15:23.280 } 00:15:23.280 Got JSON-RPC error response 00:15:23.280 response: 00:15:23.280 { 00:15:23.280 "code": -32602, 00:15:23.280 "message": "Invalid cntlid range [0-65519]" 00:15:23.280 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:23.280 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14315 -i 65520 00:15:23.542 [2024-10-07 09:36:23.046962] nvmf_rpc.c: 
434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14315: invalid cntlid range [65520-65519] 00:15:23.542 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:23.542 { 00:15:23.542 "nqn": "nqn.2016-06.io.spdk:cnode14315", 00:15:23.542 "min_cntlid": 65520, 00:15:23.542 "method": "nvmf_create_subsystem", 00:15:23.542 "req_id": 1 00:15:23.542 } 00:15:23.542 Got JSON-RPC error response 00:15:23.542 response: 00:15:23.542 { 00:15:23.542 "code": -32602, 00:15:23.542 "message": "Invalid cntlid range [65520-65519]" 00:15:23.542 }' 00:15:23.542 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:23.542 { 00:15:23.542 "nqn": "nqn.2016-06.io.spdk:cnode14315", 00:15:23.542 "min_cntlid": 65520, 00:15:23.542 "method": "nvmf_create_subsystem", 00:15:23.542 "req_id": 1 00:15:23.542 } 00:15:23.542 Got JSON-RPC error response 00:15:23.542 response: 00:15:23.542 { 00:15:23.542 "code": -32602, 00:15:23.542 "message": "Invalid cntlid range [65520-65519]" 00:15:23.542 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:23.542 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18300 -I 0 00:15:23.802 [2024-10-07 09:36:23.231527] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18300: invalid cntlid range [1-0] 00:15:23.802 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:23.802 { 00:15:23.802 "nqn": "nqn.2016-06.io.spdk:cnode18300", 00:15:23.802 "max_cntlid": 0, 00:15:23.802 "method": "nvmf_create_subsystem", 00:15:23.802 "req_id": 1 00:15:23.802 } 00:15:23.802 Got JSON-RPC error response 00:15:23.802 response: 00:15:23.802 { 00:15:23.802 "code": -32602, 00:15:23.802 "message": "Invalid cntlid range [1-0]" 00:15:23.802 }' 00:15:23.802 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:23.802 { 00:15:23.802 "nqn": "nqn.2016-06.io.spdk:cnode18300", 00:15:23.802 "max_cntlid": 0, 00:15:23.802 "method": "nvmf_create_subsystem", 00:15:23.802 "req_id": 1 00:15:23.802 } 00:15:23.802 Got JSON-RPC error response 00:15:23.802 response: 00:15:23.802 { 00:15:23.802 "code": -32602, 00:15:23.802 "message": "Invalid cntlid range [1-0]" 00:15:23.802 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:23.802 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26496 -I 65520 00:15:23.802 [2024-10-07 09:36:23.420131] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26496: invalid cntlid range [1-65520] 00:15:23.802 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:23.802 { 00:15:23.802 "nqn": "nqn.2016-06.io.spdk:cnode26496", 00:15:23.802 "max_cntlid": 65520, 00:15:23.802 "method": "nvmf_create_subsystem", 00:15:23.802 "req_id": 1 00:15:23.802 } 00:15:23.802 Got JSON-RPC error response 00:15:23.802 response: 00:15:23.802 { 00:15:23.802 "code": -32602, 00:15:23.802 "message": "Invalid cntlid range [1-65520]" 00:15:23.802 }' 00:15:23.802 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:15:23.802 { 00:15:23.802 "nqn": "nqn.2016-06.io.spdk:cnode26496", 00:15:23.802 "max_cntlid": 65520, 
00:15:23.802 "method": "nvmf_create_subsystem", 00:15:23.802 "req_id": 1 00:15:23.803 } 00:15:23.803 Got JSON-RPC error response 00:15:23.803 response: 00:15:23.803 { 00:15:23.803 "code": -32602, 00:15:23.803 "message": "Invalid cntlid range [1-65520]" 00:15:23.803 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:23.803 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3253 -i 6 -I 5 00:15:24.063 [2024-10-07 09:36:23.612753] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3253: invalid cntlid range [6-5] 00:15:24.063 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:24.063 { 00:15:24.063 "nqn": "nqn.2016-06.io.spdk:cnode3253", 00:15:24.063 "min_cntlid": 6, 00:15:24.063 "max_cntlid": 5, 00:15:24.063 "method": "nvmf_create_subsystem", 00:15:24.063 "req_id": 1 00:15:24.063 } 00:15:24.063 Got JSON-RPC error response 00:15:24.063 response: 00:15:24.063 { 00:15:24.063 "code": -32602, 00:15:24.063 "message": "Invalid cntlid range [6-5]" 00:15:24.063 }' 00:15:24.063 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:24.063 { 00:15:24.063 "nqn": "nqn.2016-06.io.spdk:cnode3253", 00:15:24.063 "min_cntlid": 6, 00:15:24.063 "max_cntlid": 5, 00:15:24.063 "method": "nvmf_create_subsystem", 00:15:24.063 "req_id": 1 00:15:24.063 } 00:15:24.063 Got JSON-RPC error response 00:15:24.063 response: 00:15:24.063 { 00:15:24.063 "code": -32602, 00:15:24.063 "message": "Invalid cntlid range [6-5]" 00:15:24.063 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:24.063 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:24.328 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:24.328 { 00:15:24.328 "name": "foobar", 00:15:24.328 "method": "nvmf_delete_target", 00:15:24.328 "req_id": 1 00:15:24.328 } 00:15:24.328 Got JSON-RPC error response 00:15:24.328 response: 00:15:24.328 { 00:15:24.328 "code": -32602, 00:15:24.328 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:24.328 }' 00:15:24.328 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:24.328 { 00:15:24.328 "name": "foobar", 00:15:24.328 "method": "nvmf_delete_target", 00:15:24.328 "req_id": 1 00:15:24.328 } 00:15:24.328 Got JSON-RPC error response 00:15:24.328 response: 00:15:24.328 { 00:15:24.328 "code": -32602, 00:15:24.328 "message": "The specified target doesn't exist, cannot delete it." 
00:15:24.328 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:24.328 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:24.329 rmmod nvme_tcp 00:15:24.329 rmmod nvme_fabrics 00:15:24.329 rmmod nvme_keyring 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 3287679 ']' 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 3287679 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' -z 3287679 ']' 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # kill -0 3287679 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # uname 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3287679 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3287679' 00:15:24.329 killing process with pid 3287679 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # kill 3287679 00:15:24.329 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@977 -- # wait 3287679 00:15:24.594 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:24.594 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:24.594 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:24.594 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:15:24.594 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore 00:15:24.594 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:15:24.594 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- 
# grep -v SPDK_NVMF 00:15:24.594 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:24.594 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:24.594 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.594 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.594 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.504 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:26.504 00:15:26.504 real 0m14.526s 00:15:26.504 user 0m21.113s 00:15:26.504 sys 0m7.062s 00:15:26.504 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:26.504 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:26.504 ************************************ 00:15:26.504 END TEST nvmf_invalid 00:15:26.504 ************************************ 00:15:26.504 09:36:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:26.504 09:36:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:15:26.504 09:36:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:26.504 09:36:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:26.765 ************************************ 00:15:26.765 START TEST nvmf_connect_stress 00:15:26.765 ************************************ 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:26.765 * Looking for test storage... 
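The invalid.sh@73-84 calls traced above all probe controller-ID validation: min_cntlid 0 or 65520, max_cntlid 0 or 65520, and the inverted pair 6-5 each come back as a -32602 "Invalid cntlid range" error. Those five probes condense to a loop like the following sketch; the loop itself is ours, while the -i/-I flags and the NQN pattern follow the trace:

    # -i = min_cntlid, -I = max_cntlid; valid range is [1-65519] per the errors.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for args in "-i 0" "-i 65520" "-I 0" "-I 65520" "-i 6 -I 5"; do
        # $args is left unquoted on purpose so it splits into separate flags.
        out=$("$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$RANDOM" $args 2>&1) || true
        [[ $out == *"Invalid cntlid range"* ]] || echo "unexpected response: $out"
    done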
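The teardown traced above (nvmftestfini, killprocess, and the iptr helper) unloads the kernel initiator modules, stops the target, and strips only the SPDK-tagged firewall rules. Roughly, and with the retry logic and error handling simplified ($nvmfpid stands in for the pid killprocess receives in the real run):

    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break    # unload may need retries while queues drain
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e
    kill "$nvmfpid" && wait "$nvmfpid"
    # iptr: drop only rules tagged '-m comment --comment SPDK_NVMF:...',
    # leaving the rest of the firewall untouched.
    iptables-save | grep -v SPDK_NVMF | iptables-restore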
00:15:26.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1626 -- # lcov --version 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:15:26.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.765 --rc genhtml_branch_coverage=1 00:15:26.765 --rc genhtml_function_coverage=1 00:15:26.765 --rc genhtml_legend=1 00:15:26.765 --rc geninfo_all_blocks=1 00:15:26.765 --rc geninfo_unexecuted_blocks=1 00:15:26.765 00:15:26.765 ' 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:15:26.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.765 --rc genhtml_branch_coverage=1 00:15:26.765 --rc genhtml_function_coverage=1 00:15:26.765 --rc genhtml_legend=1 00:15:26.765 --rc geninfo_all_blocks=1 00:15:26.765 --rc geninfo_unexecuted_blocks=1 00:15:26.765 00:15:26.765 ' 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:15:26.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.765 --rc genhtml_branch_coverage=1 00:15:26.765 --rc genhtml_function_coverage=1 00:15:26.765 --rc genhtml_legend=1 00:15:26.765 --rc geninfo_all_blocks=1 00:15:26.765 --rc geninfo_unexecuted_blocks=1 00:15:26.765 00:15:26.765 ' 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:15:26.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.765 --rc genhtml_branch_coverage=1 00:15:26.765 --rc genhtml_function_coverage=1 00:15:26.765 --rc genhtml_legend=1 00:15:26.765 --rc geninfo_all_blocks=1 00:15:26.765 --rc geninfo_unexecuted_blocks=1 00:15:26.765 00:15:26.765 ' 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
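The trace above shows scripts/common.sh deciding that lcov 1.15 is older than 2 by splitting both version strings on separators and comparing component by component, padding the shorter one with zeros. The same idea reduced to a few lines, as a simplified sketch rather than the exact cmp_versions implementation:

    version_lt() {                     # usage: version_lt 1.15 2 -> status 0 if $1 < $2
        local IFS=.- v1 v2 i
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1                       # equal is not less-than
    }
    version_lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"

With 1.15 split into (1 15) and 2 into (2), the first component already decides the comparison, which is why the trace above stops after one round of decimal checks.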
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.765 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.026 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:27.026 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:27.026 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.026 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.026 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.026 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.026 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.026 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain prefixes repeated a dozen times, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[repeated toolchain prefixes and system dirs as above, elided]
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[repeated toolchain prefixes and system dirs as above, elided]
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[repeated toolchain prefixes and system dirs as above, elided]
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:15:27.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
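The "[: : integer expression expected" line logged above is the classic test(1) pitfall that build_nvmf_app_args trips over at common.sh line 33: an empty string reaching a numeric -eq comparison. A two-line reproduction plus the usual default-expansion guard (the variable name here is ours):

    flag=""                                  # stands in for the empty parameter at line 33
    [ "$flag" -eq 1 ] && echo hit            # -> "[: : integer expression expected", status 2
    [ "${flag:-0}" -eq 1 ] || echo "empty value defaulted to 0; no error"

The trace continues past the error because the failing test simply evaluates false in its '[' ... ']' context.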
nvmf/common.sh@309 -- # xtrace_disable 00:15:27.027 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.175 09:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:35.175 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:35.175 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:35.175 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # 
(( 1 == 0 )) 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:35.176 Found net devices under 0000:31:00.0: cvl_0_0 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:35.176 Found net devices under 0000:31:00.1: cvl_0_1 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:35.176 09:36:33 
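The discovery traced above maps supported NIC PCI IDs (here Intel E810, vendor:device 8086:159b from the table built earlier) to kernel interface names through sysfs, which yields the "Found net devices under 0000:31:00.x: cvl_0_x" lines. Reduced to a sketch; the real helper walks a prebuilt pci_bus_cache rather than calling lspci:

    # Map each E810 port's PCI address to its net interface via sysfs.
    for pci in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdir" ] || continue
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done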
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:35.176 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:35.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:15:35.176 00:15:35.176 --- 10.0.0.2 ping statistics --- 00:15:35.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.176 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:35.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:35.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:15:35.176 00:15:35.176 --- 10.0.0.1 ping statistics --- 00:15:35.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.176 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=3292935 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 3292935 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # '[' -z 3292935 ']' 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local max_retries=100 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@843 -- # xtrace_disable 00:15:35.176 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.176 [2024-10-07 09:36:34.266820] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
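The two pings above confirm the E810 ports can reach each other across the namespace boundary before the target comes up. The namespace plumbing and target launch traced here reduce to a short sequence; a minimal sketch, assuming $SPDK_DIR points at the SPDK build in use (commands and flags mirror the trace, not an official recipe):

  ip netns add cvl_0_0_ns_spdk                         # isolate the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the first port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &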
00:15:35.176 [2024-10-07 09:36:34.266886] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.176 [2024-10-07 09:36:34.359384] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:35.176 [2024-10-07 09:36:34.453964] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.176 [2024-10-07 09:36:34.454032] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.176 [2024-10-07 09:36:34.454041] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.176 [2024-10-07 09:36:34.454048] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.176 [2024-10-07 09:36:34.454054] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:35.176 [2024-10-07 09:36:34.455454] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.176 [2024-10-07 09:36:34.455626] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.176 [2024-10-07 09:36:34.455634] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@867 -- # return 0 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@733 -- # xtrace_disable 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.751 [2024-10-07 09:36:35.153939] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.751 09:36:35 
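With the app up, the test configures the target entirely over JSON-RPC. A sketch of the same sequence issued directly with SPDK's scripts/rpc.py (assumed here to be on PATH and talking to the default /var/tmp/spdk.sock; all flags are copied from the rpc_cmd calls in the trace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192       # TCP transport with the trace's options
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512               # 1000 MiB null bdev, 512-byte blocks

The -a flag makes cnode1 accept any host NQN and -m 10 caps it at ten namespaces, which is why the stress tool can connect without host registration.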
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.751 [2024-10-07 09:36:35.190287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.751 NULL1 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3293282 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.751 
09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.751 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.752 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.752 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.752 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.752 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.752 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.752 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.752 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.752 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.752 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.752 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.752 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.752 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:35.752 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:35.752 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:35.752 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.752 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:35.752 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.024 09:36:35 
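The long run of near-identical entries that follows is the supervision loop: connect_stress (PID 3293282) hammers the subsystem for its 10-second run while the script keeps probing it. Roughly, assuming connect_stress.sh behaves as the trace suggests:

  # kill -0 delivers no signal; it only reports whether the PID still exists.
  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc_cmd < "$rpcs"      # replay the RPCs queued in rpc.txt against the live target
  done

Each loop iteration produces one kill -0 / rpc_cmd / xtrace_disable cluster in the log, which is what repeats below until the stress tool exits.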
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:36.024 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:36.024 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.024 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:36.024 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.596 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:36.596 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:36.596 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.596 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:36.596 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.857 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:36.857 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:36.857 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.857 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:36.857 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.119 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:37.119 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:37.119 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.119 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:37.119 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.468 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:37.468 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:37.468 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.468 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:37.468 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.787 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:37.787 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:37.787 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.787 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:37.787 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.116 09:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:38.116 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:38.116 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.116 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:38.116 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.398 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:38.399 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:38.399 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.399 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:38.399 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.659 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:38.659 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:38.659 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.659 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:38.659 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.926 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:38.926 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:38.926 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.926 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:38.926 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.508 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:39.508 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:39.508 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.508 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:39.508 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.768 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:39.768 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:39.768 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.768 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:39.768 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.028 09:36:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:40.028 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:40.028 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.028 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:40.028 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.288 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:40.288 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:40.288 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.288 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:40.288 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.549 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:40.549 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:40.549 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.549 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:40.549 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.120 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:41.120 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:41.120 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.120 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:41.120 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.380 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:41.380 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:41.380 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.380 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:41.381 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.641 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:41.641 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:41.641 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.641 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:41.641 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.902 09:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:41.902 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:41.902 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.902 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:41.902 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.162 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:42.162 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:42.162 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.423 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:42.423 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.684 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:42.684 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:42.684 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.684 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:42.684 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.945 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:42.945 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:42.945 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.945 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:42.945 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.204 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:43.204 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:43.204 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.204 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:43.204 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.830 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:43.830 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:43.830 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.830 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:43.830 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.830 09:36:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:43.830 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:43.830 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.830 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:43.830 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.401 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:44.401 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:44.401 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.401 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:44.401 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.661 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:44.661 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:44.661 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.661 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:44.661 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.921 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:44.921 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:44.921 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.921 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:44.921 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.181 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:45.181 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:45.181 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.181 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:45.181 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.442 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:45.442 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:45.442 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.442 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:45.442 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.703 Testing NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3293282 00:15:45.964 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3293282) - No such process 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3293282 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:45.964 rmmod nvme_tcp 00:15:45.964 rmmod nvme_fabrics 00:15:45.964 rmmod nvme_keyring 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 3292935 ']' 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 3292935 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' -z 3292935 ']' 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # kill -0 3292935 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # uname 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3292935 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3292935' 00:15:45.964 killing process with pid 3292935 00:15:45.964 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # kill 3292935 00:15:45.964 09:36:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@977 -- # wait 3292935 00:15:46.226 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:46.226 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:46.226 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:46.226 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:15:46.226 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:15:46.226 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:46.226 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:15:46.226 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:46.226 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:46.226 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.226 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:46.226 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.140 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:48.140 00:15:48.140 real 0m21.574s 00:15:48.140 user 0m42.275s 00:15:48.140 sys 0m9.603s 00:15:48.140 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:48.140 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.140 ************************************ 00:15:48.140 END TEST nvmf_connect_stress 00:15:48.140 ************************************ 00:15:48.140 09:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:48.140 09:36:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:15:48.140 09:36:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:48.140 09:36:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:48.401 ************************************ 00:15:48.401 START TEST nvmf_fused_ordering 00:15:48.401 ************************************ 00:15:48.401 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:48.401 * Looking for test storage... 
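The nvmftestfini teardown traced above reduces approximately to the following, assuming remove_spdk_ns amounts to deleting the namespace created earlier (the iptables line is verbatim from the trace's iptr helper):

  modprobe -r nvme-tcp nvme-fabrics                      # unload host-side kernel modules
  kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null         # stop the nvmf_tgt reactor process
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged firewall rules
  ip netns delete cvl_0_0_ns_spdk                        # assumed remove_spdk_ns equivalent
  ip -4 addr flush cvl_0_1                               # clear the initiator-side address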
00:15:48.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:48.401 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:15:48.401 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1626 -- # lcov --version 00:15:48.401 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:15:48.401 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:15:48.401 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.401 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.401 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.401 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.401 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.401 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.401 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.401 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.402 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.402 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.402 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.402 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:48.402 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:48.402 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.402 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.402 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:48.402 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:48.402 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.402 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:48.402 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:15:48.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.664 --rc genhtml_branch_coverage=1 00:15:48.664 --rc genhtml_function_coverage=1 00:15:48.664 --rc genhtml_legend=1 00:15:48.664 --rc geninfo_all_blocks=1 00:15:48.664 --rc geninfo_unexecuted_blocks=1 00:15:48.664 00:15:48.664 ' 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:15:48.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.664 --rc genhtml_branch_coverage=1 00:15:48.664 --rc genhtml_function_coverage=1 00:15:48.664 --rc genhtml_legend=1 00:15:48.664 --rc geninfo_all_blocks=1 00:15:48.664 --rc geninfo_unexecuted_blocks=1 00:15:48.664 00:15:48.664 ' 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:15:48.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.664 --rc genhtml_branch_coverage=1 00:15:48.664 --rc genhtml_function_coverage=1 00:15:48.664 --rc genhtml_legend=1 00:15:48.664 --rc geninfo_all_blocks=1 00:15:48.664 --rc geninfo_unexecuted_blocks=1 00:15:48.664 00:15:48.664 ' 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:15:48.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.664 --rc genhtml_branch_coverage=1 00:15:48.664 --rc genhtml_function_coverage=1 00:15:48.664 --rc genhtml_legend=1 00:15:48.664 --rc geninfo_all_blocks=1 00:15:48.664 --rc geninfo_unexecuted_blocks=1 00:15:48.664 00:15:48.664 ' 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.664 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:48.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@309 -- # xtrace_disable 00:15:48.665 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:56.846 09:36:55 
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:15:56.846 Found 0000:31:00.0 (0x8086 - 0x159b)
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:15:56.846 Found 0000:31:00.1 (0x8086 - 0x159b)
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]]
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:15:56.846 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:15:56.847 Found net devices under 0000:31:00.0: cvl_0_0
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]]
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:15:56.847 Found net devices under 0000:31:00.1: cvl_0_1
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
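The cvl_0_0/cvl_0_1 names picked up above come from sysfs, where each PCI network function lists its kernel netdevs under .../net/. A sketch of that lookup using this run's first port:

  # Map a PCI function to its netdev name(s), as the @409 glob above does.
  pci=0000:31:00.0
  for n in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$n" ] && echo "Found net device under $pci: ${n##*/}"
  done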
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:15:56.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:56.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms
00:15:56.847
00:15:56.847 --- 10.0.0.2 ping statistics ---
00:15:56.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:56.847 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms
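Condensed, the namespace split just traced is the following sequence (the same commands as the trace, minus prefixes; interface names are from this run, and the commands need root):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                  # initiator -> target sanity check

Two physical ports on one NIC end up behaving like two hosts on a wire, which is what lets a single machine run both sides of the NVMe/TCP connection.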
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:56.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:56.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms
00:15:56.847
00:15:56.847 --- 10.0.0.1 ping statistics ---
00:15:56.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:56.847 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@727 -- # xtrace_disable
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=3299622
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 3299622
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # '[' -z 3299622 ']'
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local max_retries=100
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:56.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@843 -- # xtrace_disable
00:15:56.847 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:56.847 [2024-10-07 09:36:55.841042] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization...
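waitforlisten above blocks until the target answers on its UNIX-domain RPC socket. A simplified sketch of that launch-and-wait pattern, assuming the SPDK build-tree layout from this run (the real helper in autotest_common.sh also enforces max_retries and richer error handling):

  # Start the target inside the namespace, then poll its RPC socket.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
    sleep 0.5
  done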
00:15:56.847 [2024-10-07 09:36:55.841107] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:56.847 [2024-10-07 09:36:55.932862] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:56.847 [2024-10-07 09:36:56.027555] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:56.847 [2024-10-07 09:36:56.027613] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:56.847 [2024-10-07 09:36:56.027635] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:56.847 [2024-10-07 09:36:56.027643] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:56.847 [2024-10-07 09:36:56.027649] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:56.847 [2024-10-07 09:36:56.028376] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:15:57.109 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # (( i == 0 ))
00:15:57.109 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@867 -- # return 0
00:15:57.109 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:15:57.109 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@733 -- # xtrace_disable
00:15:57.109 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@564 -- # xtrace_disable
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:57.110 [2024-10-07 09:36:56.725184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@564 -- # xtrace_disable
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@564 -- # xtrace_disable
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:57.110 [2024-10-07 09:36:56.749497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@564 -- # xtrace_disable
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:57.110 NULL1
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@564 -- # xtrace_disable
00:15:57.110 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:57.372 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:15:57.372 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:15:57.372 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@564 -- # xtrace_disable
00:15:57.372 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:57.372 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:15:57.372 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
[2024-10-07 09:36:56.819809] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization...
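Stripped of the rpc_cmd wrapper, the target configuration traced above amounts to these rpc.py calls (paths relative to the SPDK tree; arguments are exactly those from the trace):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512-byte blocks
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary then connects to that listener and fires 1024 fused compare-and-write style command pairs, which is what produces the fused_ordering(0)..(1023) output that follows.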
00:15:57.372 [2024-10-07 09:36:56.819874] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3299748 ] 00:15:57.633 Attached to nqn.2016-06.io.spdk:cnode1 00:15:57.633 Namespace ID: 1 size: 1GB 00:15:57.633 fused_ordering(0) 00:15:57.633 fused_ordering(1) 00:15:57.633 fused_ordering(2) 00:15:57.633 fused_ordering(3) 00:15:57.633 fused_ordering(4) 00:15:57.633 fused_ordering(5) 00:15:57.633 fused_ordering(6) 00:15:57.633 fused_ordering(7) 00:15:57.633 fused_ordering(8) 00:15:57.633 fused_ordering(9) 00:15:57.633 fused_ordering(10) 00:15:57.633 fused_ordering(11) 00:15:57.633 fused_ordering(12) 00:15:57.633 fused_ordering(13) 00:15:57.633 fused_ordering(14) 00:15:57.633 fused_ordering(15) 00:15:57.633 fused_ordering(16) 00:15:57.633 fused_ordering(17) 00:15:57.633 fused_ordering(18) 00:15:57.633 fused_ordering(19) 00:15:57.633 fused_ordering(20) 00:15:57.633 fused_ordering(21) 00:15:57.633 fused_ordering(22) 00:15:57.633 fused_ordering(23) 00:15:57.633 fused_ordering(24) 00:15:57.633 fused_ordering(25) 00:15:57.633 fused_ordering(26) 00:15:57.633 fused_ordering(27) 00:15:57.633 fused_ordering(28) 00:15:57.633 fused_ordering(29) 00:15:57.633 fused_ordering(30) 00:15:57.633 fused_ordering(31) 00:15:57.633 fused_ordering(32) 00:15:57.633 fused_ordering(33) 00:15:57.633 fused_ordering(34) 00:15:57.633 fused_ordering(35) 00:15:57.633 fused_ordering(36) 00:15:57.633 fused_ordering(37) 00:15:57.633 fused_ordering(38) 00:15:57.633 fused_ordering(39) 00:15:57.633 fused_ordering(40) 00:15:57.633 fused_ordering(41) 00:15:57.633 fused_ordering(42) 00:15:57.633 fused_ordering(43) 00:15:57.633 fused_ordering(44) 00:15:57.633 fused_ordering(45) 00:15:57.633 fused_ordering(46) 00:15:57.633 fused_ordering(47) 00:15:57.633 fused_ordering(48) 00:15:57.633 fused_ordering(49) 00:15:57.633 fused_ordering(50) 00:15:57.633 fused_ordering(51) 00:15:57.633 fused_ordering(52) 00:15:57.634 fused_ordering(53) 00:15:57.634 fused_ordering(54) 00:15:57.634 fused_ordering(55) 00:15:57.634 fused_ordering(56) 00:15:57.634 fused_ordering(57) 00:15:57.634 fused_ordering(58) 00:15:57.634 fused_ordering(59) 00:15:57.634 fused_ordering(60) 00:15:57.634 fused_ordering(61) 00:15:57.634 fused_ordering(62) 00:15:57.634 fused_ordering(63) 00:15:57.634 fused_ordering(64) 00:15:57.634 fused_ordering(65) 00:15:57.634 fused_ordering(66) 00:15:57.634 fused_ordering(67) 00:15:57.634 fused_ordering(68) 00:15:57.634 fused_ordering(69) 00:15:57.634 fused_ordering(70) 00:15:57.634 fused_ordering(71) 00:15:57.634 fused_ordering(72) 00:15:57.634 fused_ordering(73) 00:15:57.634 fused_ordering(74) 00:15:57.634 fused_ordering(75) 00:15:57.634 fused_ordering(76) 00:15:57.634 fused_ordering(77) 00:15:57.634 fused_ordering(78) 00:15:57.634 fused_ordering(79) 00:15:57.634 fused_ordering(80) 00:15:57.634 fused_ordering(81) 00:15:57.634 fused_ordering(82) 00:15:57.634 fused_ordering(83) 00:15:57.634 fused_ordering(84) 00:15:57.634 fused_ordering(85) 00:15:57.634 fused_ordering(86) 00:15:57.634 fused_ordering(87) 00:15:57.634 fused_ordering(88) 00:15:57.634 fused_ordering(89) 00:15:57.634 fused_ordering(90) 00:15:57.634 fused_ordering(91) 00:15:57.634 fused_ordering(92) 00:15:57.634 fused_ordering(93) 00:15:57.634 fused_ordering(94) 00:15:57.634 fused_ordering(95) 00:15:57.634 fused_ordering(96) 00:15:57.634 fused_ordering(97) 00:15:57.634 fused_ordering(98) 
00:15:57.634 fused_ordering(99) 00:15:57.634 fused_ordering(100) 00:15:57.634 fused_ordering(101) 00:15:57.634 fused_ordering(102) 00:15:57.634 fused_ordering(103) 00:15:57.634 fused_ordering(104) 00:15:57.634 fused_ordering(105) 00:15:57.634 fused_ordering(106) 00:15:57.634 fused_ordering(107) 00:15:57.634 fused_ordering(108) 00:15:57.634 fused_ordering(109) 00:15:57.634 fused_ordering(110) 00:15:57.634 fused_ordering(111) 00:15:57.634 fused_ordering(112) 00:15:57.634 fused_ordering(113) 00:15:57.634 fused_ordering(114) 00:15:57.634 fused_ordering(115) 00:15:57.634 fused_ordering(116) 00:15:57.634 fused_ordering(117) 00:15:57.634 fused_ordering(118) 00:15:57.634 fused_ordering(119) 00:15:57.634 fused_ordering(120) 00:15:57.634 fused_ordering(121) 00:15:57.634 fused_ordering(122) 00:15:57.634 fused_ordering(123) 00:15:57.634 fused_ordering(124) 00:15:57.634 fused_ordering(125) 00:15:57.634 fused_ordering(126) 00:15:57.634 fused_ordering(127) 00:15:57.634 fused_ordering(128) 00:15:57.634 fused_ordering(129) 00:15:57.634 fused_ordering(130) 00:15:57.634 fused_ordering(131) 00:15:57.634 fused_ordering(132) 00:15:57.634 fused_ordering(133) 00:15:57.634 fused_ordering(134) 00:15:57.634 fused_ordering(135) 00:15:57.634 fused_ordering(136) 00:15:57.634 fused_ordering(137) 00:15:57.634 fused_ordering(138) 00:15:57.634 fused_ordering(139) 00:15:57.634 fused_ordering(140) 00:15:57.634 fused_ordering(141) 00:15:57.634 fused_ordering(142) 00:15:57.634 fused_ordering(143) 00:15:57.634 fused_ordering(144) 00:15:57.634 fused_ordering(145) 00:15:57.634 fused_ordering(146) 00:15:57.634 fused_ordering(147) 00:15:57.634 fused_ordering(148) 00:15:57.634 fused_ordering(149) 00:15:57.634 fused_ordering(150) 00:15:57.634 fused_ordering(151) 00:15:57.634 fused_ordering(152) 00:15:57.634 fused_ordering(153) 00:15:57.634 fused_ordering(154) 00:15:57.634 fused_ordering(155) 00:15:57.634 fused_ordering(156) 00:15:57.634 fused_ordering(157) 00:15:57.634 fused_ordering(158) 00:15:57.634 fused_ordering(159) 00:15:57.634 fused_ordering(160) 00:15:57.634 fused_ordering(161) 00:15:57.634 fused_ordering(162) 00:15:57.634 fused_ordering(163) 00:15:57.634 fused_ordering(164) 00:15:57.634 fused_ordering(165) 00:15:57.634 fused_ordering(166) 00:15:57.634 fused_ordering(167) 00:15:57.634 fused_ordering(168) 00:15:57.634 fused_ordering(169) 00:15:57.634 fused_ordering(170) 00:15:57.634 fused_ordering(171) 00:15:57.634 fused_ordering(172) 00:15:57.634 fused_ordering(173) 00:15:57.634 fused_ordering(174) 00:15:57.634 fused_ordering(175) 00:15:57.634 fused_ordering(176) 00:15:57.634 fused_ordering(177) 00:15:57.634 fused_ordering(178) 00:15:57.634 fused_ordering(179) 00:15:57.634 fused_ordering(180) 00:15:57.634 fused_ordering(181) 00:15:57.634 fused_ordering(182) 00:15:57.634 fused_ordering(183) 00:15:57.634 fused_ordering(184) 00:15:57.634 fused_ordering(185) 00:15:57.634 fused_ordering(186) 00:15:57.634 fused_ordering(187) 00:15:57.634 fused_ordering(188) 00:15:57.634 fused_ordering(189) 00:15:57.634 fused_ordering(190) 00:15:57.634 fused_ordering(191) 00:15:57.634 fused_ordering(192) 00:15:57.634 fused_ordering(193) 00:15:57.634 fused_ordering(194) 00:15:57.634 fused_ordering(195) 00:15:57.634 fused_ordering(196) 00:15:57.634 fused_ordering(197) 00:15:57.634 fused_ordering(198) 00:15:57.634 fused_ordering(199) 00:15:57.634 fused_ordering(200) 00:15:57.634 fused_ordering(201) 00:15:57.634 fused_ordering(202) 00:15:57.634 fused_ordering(203) 00:15:57.634 fused_ordering(204) 00:15:57.634 fused_ordering(205) 00:15:58.207 
fused_ordering(206) 00:15:58.207 fused_ordering(207) 00:15:58.207 fused_ordering(208) 00:15:58.207 fused_ordering(209) 00:15:58.207 fused_ordering(210) 00:15:58.207 fused_ordering(211) 00:15:58.207 fused_ordering(212) 00:15:58.208 fused_ordering(213) 00:15:58.208 fused_ordering(214) 00:15:58.208 fused_ordering(215) 00:15:58.208 fused_ordering(216) 00:15:58.208 fused_ordering(217) 00:15:58.208 fused_ordering(218) 00:15:58.208 fused_ordering(219) 00:15:58.208 fused_ordering(220) 00:15:58.208 fused_ordering(221) 00:15:58.208 fused_ordering(222) 00:15:58.208 fused_ordering(223) 00:15:58.208 fused_ordering(224) 00:15:58.208 fused_ordering(225) 00:15:58.208 fused_ordering(226) 00:15:58.208 fused_ordering(227) 00:15:58.208 fused_ordering(228) 00:15:58.208 fused_ordering(229) 00:15:58.208 fused_ordering(230) 00:15:58.208 fused_ordering(231) 00:15:58.208 fused_ordering(232) 00:15:58.208 fused_ordering(233) 00:15:58.208 fused_ordering(234) 00:15:58.208 fused_ordering(235) 00:15:58.208 fused_ordering(236) 00:15:58.208 fused_ordering(237) 00:15:58.208 fused_ordering(238) 00:15:58.208 fused_ordering(239) 00:15:58.208 fused_ordering(240) 00:15:58.208 fused_ordering(241) 00:15:58.208 fused_ordering(242) 00:15:58.208 fused_ordering(243) 00:15:58.208 fused_ordering(244) 00:15:58.208 fused_ordering(245) 00:15:58.208 fused_ordering(246) 00:15:58.208 fused_ordering(247) 00:15:58.208 fused_ordering(248) 00:15:58.208 fused_ordering(249) 00:15:58.208 fused_ordering(250) 00:15:58.208 fused_ordering(251) 00:15:58.208 fused_ordering(252) 00:15:58.208 fused_ordering(253) 00:15:58.208 fused_ordering(254) 00:15:58.208 fused_ordering(255) 00:15:58.208 fused_ordering(256) 00:15:58.208 fused_ordering(257) 00:15:58.208 fused_ordering(258) 00:15:58.208 fused_ordering(259) 00:15:58.208 fused_ordering(260) 00:15:58.208 fused_ordering(261) 00:15:58.208 fused_ordering(262) 00:15:58.208 fused_ordering(263) 00:15:58.208 fused_ordering(264) 00:15:58.208 fused_ordering(265) 00:15:58.208 fused_ordering(266) 00:15:58.208 fused_ordering(267) 00:15:58.208 fused_ordering(268) 00:15:58.208 fused_ordering(269) 00:15:58.208 fused_ordering(270) 00:15:58.208 fused_ordering(271) 00:15:58.208 fused_ordering(272) 00:15:58.208 fused_ordering(273) 00:15:58.208 fused_ordering(274) 00:15:58.208 fused_ordering(275) 00:15:58.208 fused_ordering(276) 00:15:58.208 fused_ordering(277) 00:15:58.208 fused_ordering(278) 00:15:58.208 fused_ordering(279) 00:15:58.208 fused_ordering(280) 00:15:58.208 fused_ordering(281) 00:15:58.208 fused_ordering(282) 00:15:58.208 fused_ordering(283) 00:15:58.208 fused_ordering(284) 00:15:58.208 fused_ordering(285) 00:15:58.208 fused_ordering(286) 00:15:58.208 fused_ordering(287) 00:15:58.208 fused_ordering(288) 00:15:58.208 fused_ordering(289) 00:15:58.208 fused_ordering(290) 00:15:58.208 fused_ordering(291) 00:15:58.208 fused_ordering(292) 00:15:58.208 fused_ordering(293) 00:15:58.208 fused_ordering(294) 00:15:58.208 fused_ordering(295) 00:15:58.208 fused_ordering(296) 00:15:58.208 fused_ordering(297) 00:15:58.208 fused_ordering(298) 00:15:58.208 fused_ordering(299) 00:15:58.208 fused_ordering(300) 00:15:58.208 fused_ordering(301) 00:15:58.208 fused_ordering(302) 00:15:58.208 fused_ordering(303) 00:15:58.208 fused_ordering(304) 00:15:58.208 fused_ordering(305) 00:15:58.208 fused_ordering(306) 00:15:58.208 fused_ordering(307) 00:15:58.208 fused_ordering(308) 00:15:58.208 fused_ordering(309) 00:15:58.208 fused_ordering(310) 00:15:58.208 fused_ordering(311) 00:15:58.208 fused_ordering(312) 00:15:58.208 fused_ordering(313) 
00:15:58.208 fused_ordering(314) 00:15:58.208 fused_ordering(315) 00:15:58.208 fused_ordering(316) 00:15:58.208 fused_ordering(317) 00:15:58.208 fused_ordering(318) 00:15:58.208 fused_ordering(319) 00:15:58.208 fused_ordering(320) 00:15:58.208 fused_ordering(321) 00:15:58.208 fused_ordering(322) 00:15:58.208 fused_ordering(323) 00:15:58.208 fused_ordering(324) 00:15:58.208 fused_ordering(325) 00:15:58.208 fused_ordering(326) 00:15:58.208 fused_ordering(327) 00:15:58.208 fused_ordering(328) 00:15:58.208 fused_ordering(329) 00:15:58.208 fused_ordering(330) 00:15:58.208 fused_ordering(331) 00:15:58.208 fused_ordering(332) 00:15:58.208 fused_ordering(333) 00:15:58.208 fused_ordering(334) 00:15:58.208 fused_ordering(335) 00:15:58.208 fused_ordering(336) 00:15:58.208 fused_ordering(337) 00:15:58.208 fused_ordering(338) 00:15:58.208 fused_ordering(339) 00:15:58.208 fused_ordering(340) 00:15:58.208 fused_ordering(341) 00:15:58.208 fused_ordering(342) 00:15:58.208 fused_ordering(343) 00:15:58.208 fused_ordering(344) 00:15:58.208 fused_ordering(345) 00:15:58.208 fused_ordering(346) 00:15:58.208 fused_ordering(347) 00:15:58.208 fused_ordering(348) 00:15:58.208 fused_ordering(349) 00:15:58.208 fused_ordering(350) 00:15:58.208 fused_ordering(351) 00:15:58.208 fused_ordering(352) 00:15:58.208 fused_ordering(353) 00:15:58.208 fused_ordering(354) 00:15:58.208 fused_ordering(355) 00:15:58.208 fused_ordering(356) 00:15:58.208 fused_ordering(357) 00:15:58.208 fused_ordering(358) 00:15:58.208 fused_ordering(359) 00:15:58.208 fused_ordering(360) 00:15:58.208 fused_ordering(361) 00:15:58.208 fused_ordering(362) 00:15:58.208 fused_ordering(363) 00:15:58.208 fused_ordering(364) 00:15:58.208 fused_ordering(365) 00:15:58.208 fused_ordering(366) 00:15:58.208 fused_ordering(367) 00:15:58.208 fused_ordering(368) 00:15:58.208 fused_ordering(369) 00:15:58.208 fused_ordering(370) 00:15:58.208 fused_ordering(371) 00:15:58.208 fused_ordering(372) 00:15:58.208 fused_ordering(373) 00:15:58.208 fused_ordering(374) 00:15:58.208 fused_ordering(375) 00:15:58.208 fused_ordering(376) 00:15:58.208 fused_ordering(377) 00:15:58.208 fused_ordering(378) 00:15:58.208 fused_ordering(379) 00:15:58.208 fused_ordering(380) 00:15:58.208 fused_ordering(381) 00:15:58.208 fused_ordering(382) 00:15:58.208 fused_ordering(383) 00:15:58.208 fused_ordering(384) 00:15:58.208 fused_ordering(385) 00:15:58.208 fused_ordering(386) 00:15:58.208 fused_ordering(387) 00:15:58.208 fused_ordering(388) 00:15:58.208 fused_ordering(389) 00:15:58.208 fused_ordering(390) 00:15:58.208 fused_ordering(391) 00:15:58.208 fused_ordering(392) 00:15:58.208 fused_ordering(393) 00:15:58.208 fused_ordering(394) 00:15:58.208 fused_ordering(395) 00:15:58.208 fused_ordering(396) 00:15:58.208 fused_ordering(397) 00:15:58.208 fused_ordering(398) 00:15:58.208 fused_ordering(399) 00:15:58.208 fused_ordering(400) 00:15:58.208 fused_ordering(401) 00:15:58.208 fused_ordering(402) 00:15:58.208 fused_ordering(403) 00:15:58.208 fused_ordering(404) 00:15:58.208 fused_ordering(405) 00:15:58.208 fused_ordering(406) 00:15:58.208 fused_ordering(407) 00:15:58.208 fused_ordering(408) 00:15:58.208 fused_ordering(409) 00:15:58.208 fused_ordering(410) 00:15:58.469 fused_ordering(411) 00:15:58.469 fused_ordering(412) 00:15:58.469 fused_ordering(413) 00:15:58.469 fused_ordering(414) 00:15:58.469 fused_ordering(415) 00:15:58.469 fused_ordering(416) 00:15:58.469 fused_ordering(417) 00:15:58.469 fused_ordering(418) 00:15:58.470 fused_ordering(419) 00:15:58.470 fused_ordering(420) 00:15:58.470 
fused_ordering(421) 00:15:58.470 fused_ordering(422) 00:15:58.470 fused_ordering(423) 00:15:58.470 fused_ordering(424) 00:15:58.470 fused_ordering(425) 00:15:58.470 fused_ordering(426) 00:15:58.470 fused_ordering(427) 00:15:58.470 fused_ordering(428) 00:15:58.470 fused_ordering(429) 00:15:58.470 fused_ordering(430) 00:15:58.470 fused_ordering(431) 00:15:58.470 fused_ordering(432) 00:15:58.470 fused_ordering(433) 00:15:58.470 fused_ordering(434) 00:15:58.470 fused_ordering(435) 00:15:58.470 fused_ordering(436) 00:15:58.470 fused_ordering(437) 00:15:58.470 fused_ordering(438) 00:15:58.470 fused_ordering(439) 00:15:58.470 fused_ordering(440) 00:15:58.470 fused_ordering(441) 00:15:58.470 fused_ordering(442) 00:15:58.470 fused_ordering(443) 00:15:58.470 fused_ordering(444) 00:15:58.470 fused_ordering(445) 00:15:58.470 fused_ordering(446) 00:15:58.470 fused_ordering(447) 00:15:58.470 fused_ordering(448) 00:15:58.470 fused_ordering(449) 00:15:58.470 fused_ordering(450) 00:15:58.470 fused_ordering(451) 00:15:58.470 fused_ordering(452) 00:15:58.470 fused_ordering(453) 00:15:58.470 fused_ordering(454) 00:15:58.470 fused_ordering(455) 00:15:58.470 fused_ordering(456) 00:15:58.470 fused_ordering(457) 00:15:58.470 fused_ordering(458) 00:15:58.470 fused_ordering(459) 00:15:58.470 fused_ordering(460) 00:15:58.470 fused_ordering(461) 00:15:58.470 fused_ordering(462) 00:15:58.470 fused_ordering(463) 00:15:58.470 fused_ordering(464) 00:15:58.470 fused_ordering(465) 00:15:58.470 fused_ordering(466) 00:15:58.470 fused_ordering(467) 00:15:58.470 fused_ordering(468) 00:15:58.470 fused_ordering(469) 00:15:58.470 fused_ordering(470) 00:15:58.470 fused_ordering(471) 00:15:58.470 fused_ordering(472) 00:15:58.470 fused_ordering(473) 00:15:58.470 fused_ordering(474) 00:15:58.470 fused_ordering(475) 00:15:58.470 fused_ordering(476) 00:15:58.470 fused_ordering(477) 00:15:58.470 fused_ordering(478) 00:15:58.470 fused_ordering(479) 00:15:58.470 fused_ordering(480) 00:15:58.470 fused_ordering(481) 00:15:58.470 fused_ordering(482) 00:15:58.470 fused_ordering(483) 00:15:58.470 fused_ordering(484) 00:15:58.470 fused_ordering(485) 00:15:58.470 fused_ordering(486) 00:15:58.470 fused_ordering(487) 00:15:58.470 fused_ordering(488) 00:15:58.470 fused_ordering(489) 00:15:58.470 fused_ordering(490) 00:15:58.470 fused_ordering(491) 00:15:58.470 fused_ordering(492) 00:15:58.470 fused_ordering(493) 00:15:58.470 fused_ordering(494) 00:15:58.470 fused_ordering(495) 00:15:58.470 fused_ordering(496) 00:15:58.470 fused_ordering(497) 00:15:58.470 fused_ordering(498) 00:15:58.470 fused_ordering(499) 00:15:58.470 fused_ordering(500) 00:15:58.470 fused_ordering(501) 00:15:58.470 fused_ordering(502) 00:15:58.470 fused_ordering(503) 00:15:58.470 fused_ordering(504) 00:15:58.470 fused_ordering(505) 00:15:58.470 fused_ordering(506) 00:15:58.470 fused_ordering(507) 00:15:58.470 fused_ordering(508) 00:15:58.470 fused_ordering(509) 00:15:58.470 fused_ordering(510) 00:15:58.470 fused_ordering(511) 00:15:58.470 fused_ordering(512) 00:15:58.470 fused_ordering(513) 00:15:58.470 fused_ordering(514) 00:15:58.470 fused_ordering(515) 00:15:58.470 fused_ordering(516) 00:15:58.470 fused_ordering(517) 00:15:58.470 fused_ordering(518) 00:15:58.470 fused_ordering(519) 00:15:58.470 fused_ordering(520) 00:15:58.470 fused_ordering(521) 00:15:58.470 fused_ordering(522) 00:15:58.470 fused_ordering(523) 00:15:58.470 fused_ordering(524) 00:15:58.470 fused_ordering(525) 00:15:58.470 fused_ordering(526) 00:15:58.470 fused_ordering(527) 00:15:58.470 fused_ordering(528) 
00:15:58.470 fused_ordering(529) 00:15:58.470 fused_ordering(530) 00:15:58.470 fused_ordering(531) 00:15:58.470 fused_ordering(532) 00:15:58.470 fused_ordering(533) 00:15:58.470 fused_ordering(534) 00:15:58.470 fused_ordering(535) 00:15:58.470 fused_ordering(536) 00:15:58.470 fused_ordering(537) 00:15:58.470 fused_ordering(538) 00:15:58.470 fused_ordering(539) 00:15:58.470 fused_ordering(540) 00:15:58.470 fused_ordering(541) 00:15:58.470 fused_ordering(542) 00:15:58.470 fused_ordering(543) 00:15:58.470 fused_ordering(544) 00:15:58.470 fused_ordering(545) 00:15:58.470 fused_ordering(546) 00:15:58.470 fused_ordering(547) 00:15:58.470 fused_ordering(548) 00:15:58.470 fused_ordering(549) 00:15:58.470 fused_ordering(550) 00:15:58.470 fused_ordering(551) 00:15:58.470 fused_ordering(552) 00:15:58.470 fused_ordering(553) 00:15:58.470 fused_ordering(554) 00:15:58.470 fused_ordering(555) 00:15:58.470 fused_ordering(556) 00:15:58.470 fused_ordering(557) 00:15:58.470 fused_ordering(558) 00:15:58.470 fused_ordering(559) 00:15:58.470 fused_ordering(560) 00:15:58.470 fused_ordering(561) 00:15:58.470 fused_ordering(562) 00:15:58.470 fused_ordering(563) 00:15:58.470 fused_ordering(564) 00:15:58.470 fused_ordering(565) 00:15:58.470 fused_ordering(566) 00:15:58.470 fused_ordering(567) 00:15:58.470 fused_ordering(568) 00:15:58.470 fused_ordering(569) 00:15:58.470 fused_ordering(570) 00:15:58.470 fused_ordering(571) 00:15:58.470 fused_ordering(572) 00:15:58.470 fused_ordering(573) 00:15:58.470 fused_ordering(574) 00:15:58.470 fused_ordering(575) 00:15:58.470 fused_ordering(576) 00:15:58.470 fused_ordering(577) 00:15:58.470 fused_ordering(578) 00:15:58.470 fused_ordering(579) 00:15:58.470 fused_ordering(580) 00:15:58.470 fused_ordering(581) 00:15:58.470 fused_ordering(582) 00:15:58.470 fused_ordering(583) 00:15:58.470 fused_ordering(584) 00:15:58.470 fused_ordering(585) 00:15:58.470 fused_ordering(586) 00:15:58.470 fused_ordering(587) 00:15:58.470 fused_ordering(588) 00:15:58.470 fused_ordering(589) 00:15:58.470 fused_ordering(590) 00:15:58.470 fused_ordering(591) 00:15:58.470 fused_ordering(592) 00:15:58.470 fused_ordering(593) 00:15:58.470 fused_ordering(594) 00:15:58.470 fused_ordering(595) 00:15:58.470 fused_ordering(596) 00:15:58.470 fused_ordering(597) 00:15:58.470 fused_ordering(598) 00:15:58.470 fused_ordering(599) 00:15:58.470 fused_ordering(600) 00:15:58.470 fused_ordering(601) 00:15:58.470 fused_ordering(602) 00:15:58.470 fused_ordering(603) 00:15:58.470 fused_ordering(604) 00:15:58.470 fused_ordering(605) 00:15:58.470 fused_ordering(606) 00:15:58.470 fused_ordering(607) 00:15:58.470 fused_ordering(608) 00:15:58.470 fused_ordering(609) 00:15:58.470 fused_ordering(610) 00:15:58.470 fused_ordering(611) 00:15:58.470 fused_ordering(612) 00:15:58.470 fused_ordering(613) 00:15:58.470 fused_ordering(614) 00:15:58.470 fused_ordering(615) 00:15:59.045 fused_ordering(616) 00:15:59.045 fused_ordering(617) 00:15:59.045 fused_ordering(618) 00:15:59.045 fused_ordering(619) 00:15:59.045 fused_ordering(620) 00:15:59.045 fused_ordering(621) 00:15:59.045 fused_ordering(622) 00:15:59.045 fused_ordering(623) 00:15:59.045 fused_ordering(624) 00:15:59.045 fused_ordering(625) 00:15:59.045 fused_ordering(626) 00:15:59.045 fused_ordering(627) 00:15:59.045 fused_ordering(628) 00:15:59.045 fused_ordering(629) 00:15:59.045 fused_ordering(630) 00:15:59.045 fused_ordering(631) 00:15:59.045 fused_ordering(632) 00:15:59.045 fused_ordering(633) 00:15:59.045 fused_ordering(634) 00:15:59.045 fused_ordering(635) 00:15:59.045 
fused_ordering(636) 00:15:59.045 fused_ordering(637) 00:15:59.045 fused_ordering(638) 00:15:59.045 fused_ordering(639) 00:15:59.045 fused_ordering(640) 00:15:59.045 fused_ordering(641) 00:15:59.045 fused_ordering(642) 00:15:59.045 fused_ordering(643) 00:15:59.045 fused_ordering(644) 00:15:59.045 fused_ordering(645) 00:15:59.045 fused_ordering(646) 00:15:59.045 fused_ordering(647) 00:15:59.045 fused_ordering(648) 00:15:59.045 fused_ordering(649) 00:15:59.045 fused_ordering(650) 00:15:59.045 fused_ordering(651) 00:15:59.045 fused_ordering(652) 00:15:59.045 fused_ordering(653) 00:15:59.045 fused_ordering(654) 00:15:59.045 fused_ordering(655) 00:15:59.045 fused_ordering(656) 00:15:59.045 fused_ordering(657) 00:15:59.045 fused_ordering(658) 00:15:59.045 fused_ordering(659) 00:15:59.045 fused_ordering(660) 00:15:59.045 fused_ordering(661) 00:15:59.045 fused_ordering(662) 00:15:59.045 fused_ordering(663) 00:15:59.045 fused_ordering(664) 00:15:59.045 fused_ordering(665) 00:15:59.045 fused_ordering(666) 00:15:59.045 fused_ordering(667) 00:15:59.045 fused_ordering(668) 00:15:59.045 fused_ordering(669) 00:15:59.045 fused_ordering(670) 00:15:59.045 fused_ordering(671) 00:15:59.045 fused_ordering(672) 00:15:59.045 fused_ordering(673) 00:15:59.045 fused_ordering(674) 00:15:59.045 fused_ordering(675) 00:15:59.045 fused_ordering(676) 00:15:59.045 fused_ordering(677) 00:15:59.045 fused_ordering(678) 00:15:59.045 fused_ordering(679) 00:15:59.045 fused_ordering(680) 00:15:59.045 fused_ordering(681) 00:15:59.045 fused_ordering(682) 00:15:59.045 fused_ordering(683) 00:15:59.045 fused_ordering(684) 00:15:59.045 fused_ordering(685) 00:15:59.045 fused_ordering(686) 00:15:59.045 fused_ordering(687) 00:15:59.045 fused_ordering(688) 00:15:59.045 fused_ordering(689) 00:15:59.045 fused_ordering(690) 00:15:59.045 fused_ordering(691) 00:15:59.045 fused_ordering(692) 00:15:59.045 fused_ordering(693) 00:15:59.045 fused_ordering(694) 00:15:59.045 fused_ordering(695) 00:15:59.045 fused_ordering(696) 00:15:59.045 fused_ordering(697) 00:15:59.045 fused_ordering(698) 00:15:59.045 fused_ordering(699) 00:15:59.045 fused_ordering(700) 00:15:59.045 fused_ordering(701) 00:15:59.045 fused_ordering(702) 00:15:59.045 fused_ordering(703) 00:15:59.045 fused_ordering(704) 00:15:59.045 fused_ordering(705) 00:15:59.045 fused_ordering(706) 00:15:59.045 fused_ordering(707) 00:15:59.045 fused_ordering(708) 00:15:59.045 fused_ordering(709) 00:15:59.045 fused_ordering(710) 00:15:59.045 fused_ordering(711) 00:15:59.045 fused_ordering(712) 00:15:59.045 fused_ordering(713) 00:15:59.045 fused_ordering(714) 00:15:59.045 fused_ordering(715) 00:15:59.045 fused_ordering(716) 00:15:59.045 fused_ordering(717) 00:15:59.045 fused_ordering(718) 00:15:59.045 fused_ordering(719) 00:15:59.045 fused_ordering(720) 00:15:59.045 fused_ordering(721) 00:15:59.045 fused_ordering(722) 00:15:59.045 fused_ordering(723) 00:15:59.045 fused_ordering(724) 00:15:59.045 fused_ordering(725) 00:15:59.045 fused_ordering(726) 00:15:59.045 fused_ordering(727) 00:15:59.045 fused_ordering(728) 00:15:59.045 fused_ordering(729) 00:15:59.045 fused_ordering(730) 00:15:59.045 fused_ordering(731) 00:15:59.045 fused_ordering(732) 00:15:59.045 fused_ordering(733) 00:15:59.045 fused_ordering(734) 00:15:59.045 fused_ordering(735) 00:15:59.045 fused_ordering(736) 00:15:59.045 fused_ordering(737) 00:15:59.045 fused_ordering(738) 00:15:59.045 fused_ordering(739) 00:15:59.045 fused_ordering(740) 00:15:59.045 fused_ordering(741) 00:15:59.045 fused_ordering(742) 00:15:59.045 fused_ordering(743) 
00:15:59.045 fused_ordering(744) 00:15:59.045 fused_ordering(745) 00:15:59.045 fused_ordering(746) 00:15:59.045 fused_ordering(747) 00:15:59.045 fused_ordering(748) 00:15:59.045 fused_ordering(749) 00:15:59.045 fused_ordering(750) 00:15:59.045 fused_ordering(751) 00:15:59.045 fused_ordering(752) 00:15:59.045 fused_ordering(753) 00:15:59.045 fused_ordering(754) 00:15:59.045 fused_ordering(755) 00:15:59.045 fused_ordering(756) 00:15:59.045 fused_ordering(757) 00:15:59.045 fused_ordering(758) 00:15:59.045 fused_ordering(759) 00:15:59.045 fused_ordering(760) 00:15:59.045 fused_ordering(761) 00:15:59.045 fused_ordering(762) 00:15:59.045 fused_ordering(763) 00:15:59.045 fused_ordering(764) 00:15:59.045 fused_ordering(765) 00:15:59.045 fused_ordering(766) 00:15:59.045 fused_ordering(767) 00:15:59.045 fused_ordering(768) 00:15:59.045 fused_ordering(769) 00:15:59.045 fused_ordering(770) 00:15:59.045 fused_ordering(771) 00:15:59.045 fused_ordering(772) 00:15:59.045 fused_ordering(773) 00:15:59.045 fused_ordering(774) 00:15:59.045 fused_ordering(775) 00:15:59.045 fused_ordering(776) 00:15:59.045 fused_ordering(777) 00:15:59.045 fused_ordering(778) 00:15:59.045 fused_ordering(779) 00:15:59.045 fused_ordering(780) 00:15:59.045 fused_ordering(781) 00:15:59.045 fused_ordering(782) 00:15:59.045 fused_ordering(783) 00:15:59.045 fused_ordering(784) 00:15:59.045 fused_ordering(785) 00:15:59.045 fused_ordering(786) 00:15:59.045 fused_ordering(787) 00:15:59.045 fused_ordering(788) 00:15:59.045 fused_ordering(789) 00:15:59.045 fused_ordering(790) 00:15:59.045 fused_ordering(791) 00:15:59.045 fused_ordering(792) 00:15:59.045 fused_ordering(793) 00:15:59.045 fused_ordering(794) 00:15:59.045 fused_ordering(795) 00:15:59.045 fused_ordering(796) 00:15:59.045 fused_ordering(797) 00:15:59.045 fused_ordering(798) 00:15:59.045 fused_ordering(799) 00:15:59.045 fused_ordering(800) 00:15:59.045 fused_ordering(801) 00:15:59.045 fused_ordering(802) 00:15:59.045 fused_ordering(803) 00:15:59.045 fused_ordering(804) 00:15:59.045 fused_ordering(805) 00:15:59.045 fused_ordering(806) 00:15:59.045 fused_ordering(807) 00:15:59.045 fused_ordering(808) 00:15:59.045 fused_ordering(809) 00:15:59.045 fused_ordering(810) 00:15:59.045 fused_ordering(811) 00:15:59.045 fused_ordering(812) 00:15:59.045 fused_ordering(813) 00:15:59.045 fused_ordering(814) 00:15:59.045 fused_ordering(815) 00:15:59.045 fused_ordering(816) 00:15:59.045 fused_ordering(817) 00:15:59.045 fused_ordering(818) 00:15:59.045 fused_ordering(819) 00:15:59.045 fused_ordering(820) 00:15:59.617 fused_ordering(821) 00:15:59.617 fused_ordering(822) 00:15:59.617 fused_ordering(823) 00:15:59.617 fused_ordering(824) 00:15:59.617 fused_ordering(825) 00:15:59.617 fused_ordering(826) 00:15:59.617 fused_ordering(827) 00:15:59.617 fused_ordering(828) 00:15:59.617 fused_ordering(829) 00:15:59.617 fused_ordering(830) 00:15:59.617 fused_ordering(831) 00:15:59.617 fused_ordering(832) 00:15:59.617 fused_ordering(833) 00:15:59.617 fused_ordering(834) 00:15:59.617 fused_ordering(835) 00:15:59.617 fused_ordering(836) 00:15:59.617 fused_ordering(837) 00:15:59.617 fused_ordering(838) 00:15:59.617 fused_ordering(839) 00:15:59.617 fused_ordering(840) 00:15:59.617 fused_ordering(841) 00:15:59.617 fused_ordering(842) 00:15:59.617 fused_ordering(843) 00:15:59.617 fused_ordering(844) 00:15:59.617 fused_ordering(845) 00:15:59.617 fused_ordering(846) 00:15:59.617 fused_ordering(847) 00:15:59.617 fused_ordering(848) 00:15:59.617 fused_ordering(849) 00:15:59.617 fused_ordering(850) 00:15:59.617 
fused_ordering(851) 00:15:59.617 fused_ordering(852) 00:15:59.617 fused_ordering(853) 00:15:59.617 fused_ordering(854) 00:15:59.617 fused_ordering(855) 00:15:59.617 fused_ordering(856) 00:15:59.617 fused_ordering(857) 00:15:59.617 fused_ordering(858) 00:15:59.617 fused_ordering(859) 00:15:59.617 fused_ordering(860) 00:15:59.617 fused_ordering(861) 00:15:59.617 fused_ordering(862) 00:15:59.617 fused_ordering(863) 00:15:59.617 fused_ordering(864) 00:15:59.617 fused_ordering(865) 00:15:59.617 fused_ordering(866) 00:15:59.617 fused_ordering(867) 00:15:59.617 fused_ordering(868) 00:15:59.617 fused_ordering(869) 00:15:59.617 fused_ordering(870) 00:15:59.617 fused_ordering(871) 00:15:59.617 fused_ordering(872) 00:15:59.617 fused_ordering(873) 00:15:59.617 fused_ordering(874) 00:15:59.617 fused_ordering(875) 00:15:59.617 fused_ordering(876) 00:15:59.617 fused_ordering(877) 00:15:59.617 fused_ordering(878) 00:15:59.617 fused_ordering(879) 00:15:59.617 fused_ordering(880) 00:15:59.617 fused_ordering(881) 00:15:59.617 fused_ordering(882) 00:15:59.617 fused_ordering(883) 00:15:59.617 fused_ordering(884) 00:15:59.617 fused_ordering(885) 00:15:59.617 fused_ordering(886) 00:15:59.617 fused_ordering(887) 00:15:59.617 fused_ordering(888) 00:15:59.617 fused_ordering(889) 00:15:59.617 fused_ordering(890) 00:15:59.617 fused_ordering(891) 00:15:59.617 fused_ordering(892) 00:15:59.617 fused_ordering(893) 00:15:59.617 fused_ordering(894) 00:15:59.617 fused_ordering(895) 00:15:59.617 fused_ordering(896) 00:15:59.617 fused_ordering(897) 00:15:59.617 fused_ordering(898) 00:15:59.617 fused_ordering(899) 00:15:59.617 fused_ordering(900) 00:15:59.617 fused_ordering(901) 00:15:59.617 fused_ordering(902) 00:15:59.617 fused_ordering(903) 00:15:59.617 fused_ordering(904) 00:15:59.617 fused_ordering(905) 00:15:59.617 fused_ordering(906) 00:15:59.617 fused_ordering(907) 00:15:59.617 fused_ordering(908) 00:15:59.617 fused_ordering(909) 00:15:59.617 fused_ordering(910) 00:15:59.617 fused_ordering(911) 00:15:59.617 fused_ordering(912) 00:15:59.617 fused_ordering(913) 00:15:59.617 fused_ordering(914) 00:15:59.617 fused_ordering(915) 00:15:59.617 fused_ordering(916) 00:15:59.617 fused_ordering(917) 00:15:59.617 fused_ordering(918) 00:15:59.617 fused_ordering(919) 00:15:59.617 fused_ordering(920) 00:15:59.617 fused_ordering(921) 00:15:59.617 fused_ordering(922) 00:15:59.617 fused_ordering(923) 00:15:59.617 fused_ordering(924) 00:15:59.617 fused_ordering(925) 00:15:59.617 fused_ordering(926) 00:15:59.617 fused_ordering(927) 00:15:59.617 fused_ordering(928) 00:15:59.617 fused_ordering(929) 00:15:59.617 fused_ordering(930) 00:15:59.617 fused_ordering(931) 00:15:59.617 fused_ordering(932) 00:15:59.617 fused_ordering(933) 00:15:59.617 fused_ordering(934) 00:15:59.617 fused_ordering(935) 00:15:59.617 fused_ordering(936) 00:15:59.617 fused_ordering(937) 00:15:59.617 fused_ordering(938) 00:15:59.617 fused_ordering(939) 00:15:59.617 fused_ordering(940) 00:15:59.617 fused_ordering(941) 00:15:59.617 fused_ordering(942) 00:15:59.617 fused_ordering(943) 00:15:59.617 fused_ordering(944) 00:15:59.617 fused_ordering(945) 00:15:59.617 fused_ordering(946) 00:15:59.617 fused_ordering(947) 00:15:59.617 fused_ordering(948) 00:15:59.617 fused_ordering(949) 00:15:59.617 fused_ordering(950) 00:15:59.617 fused_ordering(951) 00:15:59.617 fused_ordering(952) 00:15:59.617 fused_ordering(953) 00:15:59.617 fused_ordering(954) 00:15:59.617 fused_ordering(955) 00:15:59.617 fused_ordering(956) 00:15:59.617 fused_ordering(957) 00:15:59.617 fused_ordering(958) 
00:15:59.617 fused_ordering(959) 00:15:59.617 fused_ordering(960) 00:15:59.617 fused_ordering(961) 00:15:59.617 fused_ordering(962) 00:15:59.617 fused_ordering(963) 00:15:59.617 fused_ordering(964) 00:15:59.617 fused_ordering(965) 00:15:59.617 fused_ordering(966) 00:15:59.617 fused_ordering(967) 00:15:59.617 fused_ordering(968) 00:15:59.617 fused_ordering(969) 00:15:59.618 fused_ordering(970) 00:15:59.618 fused_ordering(971) 00:15:59.618 fused_ordering(972) 00:15:59.618 fused_ordering(973) 00:15:59.618 fused_ordering(974) 00:15:59.618 fused_ordering(975) 00:15:59.618 fused_ordering(976) 00:15:59.618 fused_ordering(977) 00:15:59.618 fused_ordering(978) 00:15:59.618 fused_ordering(979) 00:15:59.618 fused_ordering(980) 00:15:59.618 fused_ordering(981) 00:15:59.618 fused_ordering(982) 00:15:59.618 fused_ordering(983) 00:15:59.618 fused_ordering(984) 00:15:59.618 fused_ordering(985) 00:15:59.618 fused_ordering(986) 00:15:59.618 fused_ordering(987) 00:15:59.618 fused_ordering(988) 00:15:59.618 fused_ordering(989) 00:15:59.618 fused_ordering(990) 00:15:59.618 fused_ordering(991) 00:15:59.618 fused_ordering(992) 00:15:59.618 fused_ordering(993) 00:15:59.618 fused_ordering(994) 00:15:59.618 fused_ordering(995) 00:15:59.618 fused_ordering(996) 00:15:59.618 fused_ordering(997) 00:15:59.618 fused_ordering(998) 00:15:59.618 fused_ordering(999) 00:15:59.618 fused_ordering(1000) 00:15:59.618 fused_ordering(1001) 00:15:59.618 fused_ordering(1002) 00:15:59.618 fused_ordering(1003) 00:15:59.618 fused_ordering(1004) 00:15:59.618 fused_ordering(1005) 00:15:59.618 fused_ordering(1006) 00:15:59.618 fused_ordering(1007) 00:15:59.618 fused_ordering(1008) 00:15:59.618 fused_ordering(1009) 00:15:59.618 fused_ordering(1010) 00:15:59.618 fused_ordering(1011) 00:15:59.618 fused_ordering(1012) 00:15:59.618 fused_ordering(1013) 00:15:59.618 fused_ordering(1014) 00:15:59.618 fused_ordering(1015) 00:15:59.618 fused_ordering(1016) 00:15:59.618 fused_ordering(1017) 00:15:59.618 fused_ordering(1018) 00:15:59.618 fused_ordering(1019) 00:15:59.618 fused_ordering(1020) 00:15:59.618 fused_ordering(1021) 00:15:59.618 fused_ordering(1022) 00:15:59.618 fused_ordering(1023) 00:15:59.618 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:59.618 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:59.618 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:59.618 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:15:59.618 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:59.618 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:15:59.618 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:59.618 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:59.618 rmmod nvme_tcp 00:15:59.618 rmmod nvme_fabrics 00:15:59.879 rmmod nvme_keyring 00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:15:59.879 09:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 3299622 ']'
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 3299622
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' -z 3299622 ']'
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # kill -0 3299622
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # uname
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3299622
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # process_name=reactor_1
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']'
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3299622'
00:15:59.879 killing process with pid 3299622
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # kill 3299622
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@977 -- # wait 3299622
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:59.879 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:16:02.430
00:16:02.430 real 0m13.750s
00:16:02.430 user 0m7.212s
00:16:02.430 sys 0m7.424s
00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # xtrace_disable
00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:02.430 ************************************
00:16:02.430 END TEST nvmf_fused_ordering
************************************
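The teardown traced above reduces to a few steps; note how the iptr helper removes only the rule added earlier by filtering the SPDK_NVMF-tagged entry out of a full dump. A sketch (kill/wait, the iptables pipeline, and the address flush are straight from the trace; the netns delete line is an approximation of what _remove_spdk_ns does):

  kill "$nvmfpid" && wait "$nvmfpid"                    # stop nvmf_tgt
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only SPDK-tagged rules
  ip netns delete cvl_0_0_ns_spdk                       # sketch of _remove_spdk_ns
  ip -4 addr flush cvl_0_1

Tagging rules with an "-m comment" marker at insert time is what makes this comment-filtered restore safe to run without disturbing unrelated firewall state.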
************************************ 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:02.430 ************************************ 00:16:02.430 START TEST nvmf_ns_masking 00:16:02.430 ************************************ 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:02.430 * Looking for test storage... 00:16:02.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1626 -- # lcov --version 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:16:02.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.430 --rc genhtml_branch_coverage=1 00:16:02.430 --rc genhtml_function_coverage=1 00:16:02.430 --rc genhtml_legend=1 00:16:02.430 --rc geninfo_all_blocks=1 00:16:02.430 --rc geninfo_unexecuted_blocks=1 00:16:02.430 00:16:02.430 ' 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:16:02.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.430 --rc genhtml_branch_coverage=1 00:16:02.430 --rc genhtml_function_coverage=1 00:16:02.430 --rc genhtml_legend=1 00:16:02.430 --rc geninfo_all_blocks=1 00:16:02.430 --rc geninfo_unexecuted_blocks=1 00:16:02.430 00:16:02.430 ' 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:16:02.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.430 --rc genhtml_branch_coverage=1 00:16:02.430 --rc genhtml_function_coverage=1 00:16:02.430 --rc genhtml_legend=1 00:16:02.430 --rc geninfo_all_blocks=1 00:16:02.430 --rc geninfo_unexecuted_blocks=1 00:16:02.430 00:16:02.430 ' 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:16:02.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.430 --rc genhtml_branch_coverage=1 00:16:02.430 --rc genhtml_function_coverage=1 00:16:02.430 --rc genhtml_legend=1 00:16:02.430 --rc geninfo_all_blocks=1 00:16:02.430 --rc geninfo_unexecuted_blocks=1 00:16:02.430 00:16:02.430 ' 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.430 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:02.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=dd8e14e4-f988-4096-8390-be8f5d138a2a 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=6643cfa5-d14d-4631-a028-4f436ef5e912 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:02.431 09:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=4d9b4c0d-6dbf-458d-867a-7865973b0082 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:16:02.431 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:10.574 09:37:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:10.574 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:10.574 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:10.574 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:10.575 Found net devices under 0000:31:00.0: cvl_0_0 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:10.575 Found net devices under 0000:31:00.1: cvl_0_1 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:10.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:10.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:16:10.575 00:16:10.575 --- 10.0.0.2 ping statistics --- 00:16:10.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.575 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:10.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:10.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:16:10.575 00:16:10.575 --- 10.0.0.1 ping statistics --- 00:16:10.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.575 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=3304551 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 3304551 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # '[' -z 3304551 ']' 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local max_retries=100 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@843 -- # xtrace_disable 00:16:10.575 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:10.575 [2024-10-07 09:37:09.764222] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:16:10.575 [2024-10-07 09:37:09.764286] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.575 [2024-10-07 09:37:09.854326] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.575 [2024-10-07 09:37:09.949422] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.575 [2024-10-07 09:37:09.949490] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.575 [2024-10-07 09:37:09.949500] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.575 [2024-10-07 09:37:09.949507] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.575 [2024-10-07 09:37:09.949514] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:10.575 [2024-10-07 09:37:09.950326] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.147 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:16:11.147 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@867 -- # return 0 00:16:11.147 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:11.147 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@733 -- # xtrace_disable 00:16:11.147 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:11.147 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:11.147 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:11.147 [2024-10-07 09:37:10.789172] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.407 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:11.407 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:11.407 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:11.407 Malloc1 00:16:11.407 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:11.667 Malloc2 00:16:11.667 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:11.928 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:12.190 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:12.190 [2024-10-07 09:37:11.805899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.190 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:12.190 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4d9b4c0d-6dbf-458d-867a-7865973b0082 -a 10.0.0.2 -s 4420 -i 4 00:16:12.452 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:12.452 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local i=0 00:16:12.452 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local nvme_device_counter=1 nvme_devices=0 00:16:12.452 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # [[ -n '' ]] 00:16:12.452 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # sleep 2 00:16:14.389 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # (( i++ <= 15 )) 00:16:14.389 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # lsblk -l -o NAME,SERIAL 00:16:14.389 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # grep -c SPDKISFASTANDAWESOME 00:16:14.389 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # nvme_devices=1 00:16:14.389 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # (( nvme_devices == nvme_device_counter )) 00:16:14.389 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # return 0 00:16:14.389 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:14.389 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:14.650 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:14.650 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:14.650 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:14.650 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:14.650 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:14.650 [ 0]:0x1 00:16:14.650 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:14.650 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:14.650 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f2f3a90260d3462bb4c2e2e47a7afee6 00:16:14.650 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f2f3a90260d3462bb4c2e2e47a7afee6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:14.650 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:14.650 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:14.650 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:14.650 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:14.910 [ 0]:0x1 00:16:14.910 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:14.910 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:14.910 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f2f3a90260d3462bb4c2e2e47a7afee6 00:16:14.910 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f2f3a90260d3462bb4c2e2e47a7afee6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:14.910 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:14.910 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:14.910 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:14.910 [ 1]:0x2 00:16:14.910 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:14.910 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:14.910 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2c52c94522ac43bb8937ff9ecc2b97e8 00:16:14.910 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2c52c94522ac43bb8937ff9ecc2b97e8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:14.911 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:14.911 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:15.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.171 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:15.432 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:15.432 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:15.432 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4d9b4c0d-6dbf-458d-867a-7865973b0082 -a 10.0.0.2 -s 4420 -i 4 00:16:15.692 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:15.692 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local i=0 00:16:15.692 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.692 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # [[ -n 1 ]] 00:16:15.692 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_device_counter=1 00:16:15.692 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # sleep 2 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # (( i++ <= 15 )) 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # lsblk -l -o NAME,SERIAL 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # grep -c SPDKISFASTANDAWESOME 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # nvme_devices=1 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # (( nvme_devices == nvme_device_counter )) 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # return 0 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # local es=0 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # valid_exec_arg ns_is_visible 0x1 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@641 -- # local arg=ns_is_visible 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # type -t ns_is_visible 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@656 -- # ns_is_visible 0x1 00:16:17.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:17.867 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:17.867 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:17.867 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:17.867 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:17.867 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:17.867 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@656 -- # es=1 00:16:17.867 09:37:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:16:17.867 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:16:17.867 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:16:17.867 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:17.867 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:17.867 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:17.867 [ 0]:0x2 00:16:17.867 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:17.867 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:17.867 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2c52c94522ac43bb8937ff9ecc2b97e8 00:16:17.867 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2c52c94522ac43bb8937ff9ecc2b97e8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:17.867 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:18.129 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:18.129 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:18.129 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:18.129 [ 0]:0x1 00:16:18.129 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:18.129 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:18.129 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f2f3a90260d3462bb4c2e2e47a7afee6 00:16:18.129 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f2f3a90260d3462bb4c2e2e47a7afee6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.129 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:18.129 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:18.129 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:18.129 [ 1]:0x2 00:16:18.129 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:18.129 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:18.129 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2c52c94522ac43bb8937ff9ecc2b97e8 00:16:18.129 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2c52c94522ac43bb8937ff9ecc2b97e8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.129 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # local es=0 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # valid_exec_arg ns_is_visible 0x1 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@641 -- # local arg=ns_is_visible 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # type -t ns_is_visible 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@656 -- # ns_is_visible 0x1 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@656 -- # es=1 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:18.391 [ 0]:0x2 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2c52c94522ac43bb8937ff9ecc2b97e8 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2c52c94522ac43bb8937ff9ecc2b97e8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:18.391 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:18.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.391 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:18.652 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:18.652 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4d9b4c0d-6dbf-458d-867a-7865973b0082 -a 10.0.0.2 -s 4420 -i 4 00:16:18.918 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:18.918 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local i=0 00:16:18.919 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.919 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # [[ -n 2 ]] 00:16:18.919 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_device_counter=2 00:16:18.919 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # sleep 2 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # (( i++ <= 15 )) 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # lsblk -l -o NAME,SERIAL 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # grep -c SPDKISFASTANDAWESOME 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # nvme_devices=2 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # (( nvme_devices == nvme_device_counter )) 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # return 0 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:20.881 [ 0]:0x1 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f2f3a90260d3462bb4c2e2e47a7afee6 00:16:20.881 09:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f2f3a90260d3462bb4c2e2e47a7afee6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:20.881 [ 1]:0x2 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2c52c94522ac43bb8937ff9ecc2b97e8 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2c52c94522ac43bb8937ff9ecc2b97e8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:20.881 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # local es=0 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # valid_exec_arg ns_is_visible 0x1 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@641 -- # local arg=ns_is_visible 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # type -t ns_is_visible 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@656 -- # ns_is_visible 0x1 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@656 -- # es=1 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:16:21.142 09:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:21.142 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:21.402 [ 0]:0x2 00:16:21.403 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:21.403 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:21.403 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2c52c94522ac43bb8937ff9ecc2b97e8 00:16:21.403 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2c52c94522ac43bb8937ff9ecc2b97e8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:21.403 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:21.403 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # local es=0 00:16:21.403 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:21.403 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@641 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:21.403 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:16:21.403 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:21.403 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:16:21.403 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@647 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:21.403 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:16:21.403 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@647 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:21.403 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@647 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:21.403 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@656 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:21.403 [2024-10-07 09:37:21.039435] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:21.403 request: 00:16:21.403 { 00:16:21.403 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.403 "nsid": 2, 00:16:21.403 "host": "nqn.2016-06.io.spdk:host1", 00:16:21.403 "method": "nvmf_ns_remove_host", 00:16:21.403 
"req_id": 1 00:16:21.403 } 00:16:21.403 Got JSON-RPC error response 00:16:21.403 response: 00:16:21.403 { 00:16:21.403 "code": -32602, 00:16:21.403 "message": "Invalid parameters" 00:16:21.403 } 00:16:21.403 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@656 -- # es=1 00:16:21.403 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:16:21.403 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:16:21.403 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:16:21.403 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:21.403 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # local es=0 00:16:21.403 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # valid_exec_arg ns_is_visible 0x1 00:16:21.403 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@641 -- # local arg=ns_is_visible 00:16:21.403 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:16:21.403 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # type -t ns_is_visible 00:16:21.403 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:16:21.403 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@656 -- # ns_is_visible 0x1 00:16:21.403 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:21.403 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:21.663 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:21.663 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:21.663 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:21.663 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:21.663 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@656 -- # es=1 00:16:21.663 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:16:21.663 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:16:21.663 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:16:21.663 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:21.663 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:21.663 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:21.663 [ 0]:0x2 00:16:21.663 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:21.663 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:21.663 
09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2c52c94522ac43bb8937ff9ecc2b97e8 00:16:21.663 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2c52c94522ac43bb8937ff9ecc2b97e8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:21.663 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:21.663 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:21.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.923 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3306995 00:16:21.923 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.923 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:21.923 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3306995 /var/tmp/host.sock 00:16:21.923 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # '[' -z 3306995 ']' 00:16:21.923 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/host.sock 00:16:21.923 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local max_retries=100 00:16:21.923 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:21.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:21.923 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@843 -- # xtrace_disable 00:16:21.923 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:21.923 [2024-10-07 09:37:21.421307] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:16:21.923 [2024-10-07 09:37:21.421356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3306995 ] 00:16:21.923 [2024-10-07 09:37:21.500329] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.923 [2024-10-07 09:37:21.565381] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.864 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:16:22.864 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@867 -- # return 0 00:16:22.864 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.864 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:23.124 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid dd8e14e4-f988-4096-8390-be8f5d138a2a 00:16:23.124 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:16:23.124 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g DD8E14E4F98840968390BE8F5D138A2A -i 00:16:23.124 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 6643cfa5-d14d-4631-a028-4f436ef5e912 00:16:23.124 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:16:23.124 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 6643CFA5D14D4631A0284F436EF5E912 -i 00:16:23.386 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:23.646 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:23.646 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:23.646 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:24.217 nvme0n1 00:16:24.217 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:24.217 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:24.478 nvme1n2 00:16:24.478 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:24.478 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:24.478 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:24.478 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:24.478 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:24.739 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:24.739 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:24.739 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:24.739 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:25.002 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ dd8e14e4-f988-4096-8390-be8f5d138a2a == \d\d\8\e\1\4\e\4\-\f\9\8\8\-\4\0\9\6\-\8\3\9\0\-\b\e\8\f\5\d\1\3\8\a\2\a ]] 00:16:25.002 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:25.002 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:25.002 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:25.002 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 6643cfa5-d14d-4631-a028-4f436ef5e912 == \6\6\4\3\c\f\a\5\-\d\1\4\d\-\4\6\3\1\-\a\0\2\8\-\4\f\4\3\6\e\f\5\e\9\1\2 ]] 00:16:25.002 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3306995 00:16:25.002 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' -z 3306995 ']' 00:16:25.002 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # kill -0 3306995 00:16:25.002 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # uname 00:16:25.002 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:16:25.002 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3306995 00:16:25.264 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:16:25.264 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:16:25.264 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3306995' 00:16:25.264 
killing process with pid 3306995 00:16:25.264 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # kill 3306995 00:16:25.264 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@977 -- # wait 3306995 00:16:25.526 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.526 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:25.526 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:25.526 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:25.526 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:16:25.526 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:25.526 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:16:25.526 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:25.526 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:25.526 rmmod nvme_tcp 00:16:25.526 rmmod nvme_fabrics 00:16:25.526 rmmod nvme_keyring 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 3304551 ']' 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 3304551 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' -z 3304551 ']' 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # kill -0 3304551 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # uname 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3304551 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3304551' 00:16:25.788 killing process with pid 3304551 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # kill 3304551 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@977 -- # wait 3304551 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:25.788 09:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.788 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:28.337 00:16:28.337 real 0m25.832s 00:16:28.337 user 0m26.269s 00:16:28.337 sys 0m8.111s 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # xtrace_disable 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:28.337 ************************************ 00:16:28.337 END TEST nvmf_ns_masking 00:16:28.337 ************************************ 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:28.337 ************************************ 00:16:28.337 START TEST nvmf_nvme_cli 00:16:28.337 ************************************ 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:28.337 * Looking for test storage... 
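For reference, the namespace-masking probe that the test ending above repeats (ns_is_visible in target/ns_masking.sh) reduces to the following; this is a condensed sketch reconstructed from the trace, not the script verbatim, and it assumes the connected controller shows up as /dev/nvme0 as it does throughout this run:

ns_is_visible() {
    # A namespace masked from this host simply does not appear in list-ns.
    nvme list-ns /dev/nvme0 | grep "$1"
    # Identify Namespace on a masked NSID also reports an all-zero NGUID,
    # which is what the [[ $nguid != 0...0 ]] comparisons in the trace check.
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

In the run above, NOT ns_is_visible 0x1 passes right after nvmf_ns_remove_host because the reported NGUID collapses to zeros, while ns_is_visible 0x2 keeps returning the real NGUID 2c52c94522ac43bb8937ff9ecc2b97e8.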
00:16:28.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1626 -- # lcov --version 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:16:28.337 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:16:28.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.338 --rc genhtml_branch_coverage=1 00:16:28.338 --rc genhtml_function_coverage=1 00:16:28.338 --rc genhtml_legend=1 00:16:28.338 --rc geninfo_all_blocks=1 00:16:28.338 --rc geninfo_unexecuted_blocks=1 00:16:28.338 00:16:28.338 ' 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:16:28.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.338 --rc genhtml_branch_coverage=1 00:16:28.338 --rc genhtml_function_coverage=1 00:16:28.338 --rc genhtml_legend=1 00:16:28.338 --rc geninfo_all_blocks=1 00:16:28.338 --rc geninfo_unexecuted_blocks=1 00:16:28.338 00:16:28.338 ' 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:16:28.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.338 --rc genhtml_branch_coverage=1 00:16:28.338 --rc genhtml_function_coverage=1 00:16:28.338 --rc genhtml_legend=1 00:16:28.338 --rc geninfo_all_blocks=1 00:16:28.338 --rc geninfo_unexecuted_blocks=1 00:16:28.338 00:16:28.338 ' 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:16:28.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.338 --rc genhtml_branch_coverage=1 00:16:28.338 --rc genhtml_function_coverage=1 00:16:28.338 --rc genhtml_legend=1 00:16:28.338 --rc geninfo_all_blocks=1 00:16:28.338 --rc geninfo_unexecuted_blocks=1 00:16:28.338 00:16:28.338 ' 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.338 09:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:28.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.338 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:28.339 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:28.339 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:16:28.339 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:16:36.481 09:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.481 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:36.481 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.482 09:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:36.482 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:36.482 Found net devices under 0000:31:00.0: cvl_0_0 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:36.482 Found net devices under 0000:31:00.1: cvl_0_1 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:16:36.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:16:36.482 00:16:36.482 --- 10.0.0.2 ping statistics --- 00:16:36.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.482 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:36.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:16:36.482 00:16:36.482 --- 10.0.0.1 ping statistics --- 00:16:36.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.482 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=3312103 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 3312103 00:16:36.482 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:36.483 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # '[' -z 3312103 ']' 00:16:36.483 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.483 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local max_retries=100 00:16:36.483 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
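The test network those two pings validate was assembled a few entries earlier by nvmf_tcp_init. Condensed from the commands traced in this log (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this run):

ip netns add cvl_0_0_ns_spdk                          # target side gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator (host) side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                    # host -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> host

The SPDK_NVMF comment tag on the iptables rule is what lets teardown strip it later with iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen at the end of the ns_masking test above.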
00:16:36.483 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@843 -- # xtrace_disable 00:16:36.483 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:36.483 [2024-10-07 09:37:35.659355] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:16:36.483 [2024-10-07 09:37:35.659416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.483 [2024-10-07 09:37:35.748973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:36.483 [2024-10-07 09:37:35.845781] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.483 [2024-10-07 09:37:35.845845] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.483 [2024-10-07 09:37:35.845853] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.483 [2024-10-07 09:37:35.845861] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.483 [2024-10-07 09:37:35.845867] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.483 [2024-10-07 09:37:35.847946] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.483 [2024-10-07 09:37:35.848108] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.483 [2024-10-07 09:37:35.848269] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:16:36.483 [2024-10-07 09:37:35.848269] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@867 -- # return 0 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@733 -- # xtrace_disable 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:37.057 [2024-10-07 09:37:36.537643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:37.057 Malloc0 00:16:37.057 09:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:37.057 Malloc1 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:37.057 [2024-10-07 09:37:36.639939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:37.057 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:16:37.318 
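[Annotation] The RPC sequence just recorded builds the whole NVMe-oF target state in one pass, so the nvme discover call above should report exactly two records in the output that follows: the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1. The same sequence, written directly against scripts/rpc.py (rpc_cmd in the harness forwards to it); every value below is taken verbatim from the log:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192        # transport options exactly as logged
$rpc bdev_malloc_create 64 512 -b Malloc0           # two 64 MiB RAM disks, 512 B blocks
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

After connecting (nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420, with the host NQN/ID shown in the log), the harness confirms both namespaces arrived by counting serials, lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME, and expects 2. Teardown later in this test mirrors the setup: nvme disconnect -n nqn.2016-06.io.spdk:cnode1, nvmf_delete_subsystem, then killprocess of the target pid.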
00:16:37.318 Discovery Log Number of Records 2, Generation counter 2 00:16:37.318 =====Discovery Log Entry 0====== 00:16:37.318 trtype: tcp 00:16:37.318 adrfam: ipv4 00:16:37.318 subtype: current discovery subsystem 00:16:37.318 treq: not required 00:16:37.318 portid: 0 00:16:37.318 trsvcid: 4420 00:16:37.318 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:37.318 traddr: 10.0.0.2 00:16:37.318 eflags: explicit discovery connections, duplicate discovery information 00:16:37.318 sectype: none 00:16:37.318 =====Discovery Log Entry 1====== 00:16:37.318 trtype: tcp 00:16:37.318 adrfam: ipv4 00:16:37.318 subtype: nvme subsystem 00:16:37.318 treq: not required 00:16:37.318 portid: 0 00:16:37.318 trsvcid: 4420 00:16:37.318 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:37.318 traddr: 10.0.0.2 00:16:37.318 eflags: none 00:16:37.318 sectype: none 00:16:37.318 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:37.318 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:37.318 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:16:37.318 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:37.318 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:16:37.318 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:16:37.318 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:37.318 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:16:37.318 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:37.318 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:37.318 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:39.233 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:39.233 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local i=0 00:16:39.233 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local nvme_device_counter=1 nvme_devices=0 00:16:39.233 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # [[ -n 2 ]] 00:16:39.233 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # nvme_device_counter=2 00:16:39.233 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # sleep 2 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # (( i++ <= 15 )) 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # lsblk -l -o NAME,SERIAL 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # grep -c SPDKISFASTANDAWESOME 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # nvme_devices=2 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1211 -- # (( nvme_devices == nvme_device_counter )) 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # return 0 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:41.148 /dev/nvme0n2 ]] 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:41.148 09:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:41.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # local i=0 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -o NAME,SERIAL 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME,SERIAL 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1234 -- # return 0 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:41.148 rmmod nvme_tcp 00:16:41.148 rmmod nvme_fabrics 00:16:41.148 rmmod nvme_keyring 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 3312103 ']' 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 3312103 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' -z 3312103 ']' 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # kill -0 3312103 00:16:41.148 09:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # uname 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3312103 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3312103' 00:16:41.148 killing process with pid 3312103 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # kill 3312103 00:16:41.148 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@977 -- # wait 3312103 00:16:41.410 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:41.410 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:41.410 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:41.410 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:16:41.410 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:16:41.410 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:41.410 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:16:41.410 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:41.410 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:41.410 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.410 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.410 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.961 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:43.961 00:16:43.961 real 0m15.417s 00:16:43.961 user 0m22.560s 00:16:43.961 sys 0m6.574s 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # xtrace_disable 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:43.961 ************************************ 00:16:43.961 END TEST nvmf_nvme_cli 00:16:43.961 ************************************ 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # 
set +x 00:16:43.961 ************************************ 00:16:43.961 START TEST nvmf_vfio_user 00:16:43.961 ************************************ 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:43.961 * Looking for test storage... 00:16:43.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1626 -- # lcov --version 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:16:43.961 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:16:43.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.962 --rc genhtml_branch_coverage=1 00:16:43.962 --rc genhtml_function_coverage=1 00:16:43.962 --rc genhtml_legend=1 00:16:43.962 --rc geninfo_all_blocks=1 00:16:43.962 --rc geninfo_unexecuted_blocks=1 00:16:43.962 00:16:43.962 ' 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:16:43.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.962 --rc genhtml_branch_coverage=1 00:16:43.962 --rc genhtml_function_coverage=1 00:16:43.962 --rc genhtml_legend=1 00:16:43.962 --rc geninfo_all_blocks=1 00:16:43.962 --rc geninfo_unexecuted_blocks=1 00:16:43.962 00:16:43.962 ' 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:16:43.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.962 --rc genhtml_branch_coverage=1 00:16:43.962 --rc genhtml_function_coverage=1 00:16:43.962 --rc genhtml_legend=1 00:16:43.962 --rc geninfo_all_blocks=1 00:16:43.962 --rc geninfo_unexecuted_blocks=1 00:16:43.962 00:16:43.962 ' 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:16:43.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.962 --rc genhtml_branch_coverage=1 00:16:43.962 --rc genhtml_function_coverage=1 00:16:43.962 --rc genhtml_legend=1 00:16:43.962 --rc geninfo_all_blocks=1 00:16:43.962 --rc geninfo_unexecuted_blocks=1 00:16:43.962 00:16:43.962 ' 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.962 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:43.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3313767 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3313767' 00:16:43.963 Process pid: 3313767 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3313767 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # '[' -z 3313767 ']' 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local max_retries=100 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@843 -- # xtrace_disable 00:16:43.963 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:43.963 [2024-10-07 09:37:43.437233] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:16:43.963 [2024-10-07 09:37:43.437314] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.963 [2024-10-07 09:37:43.517064] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:43.963 [2024-10-07 09:37:43.578608] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.963 [2024-10-07 09:37:43.578650] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.963 [2024-10-07 09:37:43.578656] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.963 [2024-10-07 09:37:43.578663] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.963 [2024-10-07 09:37:43.578667] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
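[Annotation] Once the reactors come up in the next records, the harness creates the VFIOUSER transport and provisions one subsystem per emulated device. Unlike TCP, a vfio-user listener address is a filesystem directory: the target creates its socket (the cntrl file seen later in this log) inside it, and the service id is unused, hence -s 0. A sketch of the loop, with all values taken from the log:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER
for i in 1 2; do
    dir=/var/run/vfio-user/domain/vfio-user$i/$i
    mkdir -p "$dir"                                           # listener directory must pre-exist
    $rpc bdev_malloc_create 64 512 -b Malloc$i
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a "$dir" -s 0
done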
00:16:43.963 [2024-10-07 09:37:43.580106] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.963 [2024-10-07 09:37:43.580263] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.963 [2024-10-07 09:37:43.580380] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.963 [2024-10-07 09:37:43.580381] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:16:44.909 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:16:44.909 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@867 -- # return 0 00:16:44.909 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:45.852 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:45.852 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:45.852 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:45.852 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:45.852 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:45.852 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:46.113 Malloc1 00:16:46.113 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:46.374 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:46.374 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:46.635 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:46.635 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:46.635 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:46.896 Malloc2 00:16:46.896 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:47.157 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:47.157 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:47.420 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:47.420 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:47.420 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:47.420 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:47.420 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:47.420 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:47.420 [2024-10-07 09:37:46.958708] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:16:47.420 [2024-10-07 09:37:46.958749] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3314515 ] 00:16:47.420 [2024-10-07 09:37:46.987724] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:47.420 [2024-10-07 09:37:46.999870] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:47.421 [2024-10-07 09:37:46.999888] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f83c851c000 00:16:47.421 [2024-10-07 09:37:47.000864] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:47.421 [2024-10-07 09:37:47.001869] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:47.421 [2024-10-07 09:37:47.002871] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:47.421 [2024-10-07 09:37:47.003874] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:47.421 [2024-10-07 09:37:47.004888] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:47.421 [2024-10-07 09:37:47.005889] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:47.421 [2024-10-07 09:37:47.006899] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:47.421 [2024-10-07 09:37:47.007908] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:47.421 [2024-10-07 09:37:47.008914] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:47.421 [2024-10-07 
09:37:47.008922] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f83c8511000 00:16:47.421 [2024-10-07 09:37:47.009839] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:47.421 [2024-10-07 09:37:47.019291] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:47.421 [2024-10-07 09:37:47.019311] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:47.421 [2024-10-07 09:37:47.024005] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:47.421 [2024-10-07 09:37:47.024038] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:47.421 [2024-10-07 09:37:47.024097] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:47.421 [2024-10-07 09:37:47.024111] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:47.421 [2024-10-07 09:37:47.024115] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:47.421 [2024-10-07 09:37:47.025006] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:47.421 [2024-10-07 09:37:47.025013] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:47.421 [2024-10-07 09:37:47.025018] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:47.421 [2024-10-07 09:37:47.026005] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:47.421 [2024-10-07 09:37:47.026012] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:47.421 [2024-10-07 09:37:47.026017] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:47.421 [2024-10-07 09:37:47.027010] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:47.421 [2024-10-07 09:37:47.027017] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:47.421 [2024-10-07 09:37:47.028015] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:47.421 [2024-10-07 09:37:47.028022] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:47.421 [2024-10-07 09:37:47.028025] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:47.421 [2024-10-07 
09:37:47.028030] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:47.421 [2024-10-07 09:37:47.028134] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:47.421 [2024-10-07 09:37:47.028137] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:47.421 [2024-10-07 09:37:47.028141] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:47.421 [2024-10-07 09:37:47.029021] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:47.421 [2024-10-07 09:37:47.030032] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:47.421 [2024-10-07 09:37:47.031042] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:47.421 [2024-10-07 09:37:47.032044] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:47.421 [2024-10-07 09:37:47.032095] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:47.421 [2024-10-07 09:37:47.033057] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:47.421 [2024-10-07 09:37:47.033063] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:47.421 [2024-10-07 09:37:47.033067] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:47.421 [2024-10-07 09:37:47.033081] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:47.421 [2024-10-07 09:37:47.033091] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:47.421 [2024-10-07 09:37:47.033102] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:47.421 [2024-10-07 09:37:47.033106] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:47.421 [2024-10-07 09:37:47.033109] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:47.421 [2024-10-07 09:37:47.033118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:47.421 [2024-10-07 09:37:47.033149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:47.421 [2024-10-07 09:37:47.033156] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:47.421 [2024-10-07 09:37:47.033159] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:47.421 [2024-10-07 09:37:47.033163] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:47.421 [2024-10-07 09:37:47.033166] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:47.421 [2024-10-07 09:37:47.033169] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:47.421 [2024-10-07 09:37:47.033173] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:47.421 [2024-10-07 09:37:47.033176] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:47.421 [2024-10-07 09:37:47.033181] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:47.421 [2024-10-07 09:37:47.033188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:47.421 [2024-10-07 09:37:47.033203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:47.421 [2024-10-07 09:37:47.033210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.421 [2024-10-07 09:37:47.033217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.421 [2024-10-07 09:37:47.033223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.421 [2024-10-07 09:37:47.033229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.421 [2024-10-07 09:37:47.033232] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:47.421 [2024-10-07 09:37:47.033238] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:47.421 [2024-10-07 09:37:47.033245] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:47.421 [2024-10-07 09:37:47.033255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:47.421 [2024-10-07 09:37:47.033258] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:47.421 [2024-10-07 09:37:47.033262] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:47.421 [2024-10-07 09:37:47.033269] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:47.421 [2024-10-07 09:37:47.033275] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:47.421 [2024-10-07 09:37:47.033281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:47.421 [2024-10-07 09:37:47.033290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:47.421 [2024-10-07 09:37:47.033332] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:47.422 [2024-10-07 09:37:47.033338] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:47.422 [2024-10-07 09:37:47.033343] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:47.422 [2024-10-07 09:37:47.033346] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:47.422 [2024-10-07 09:37:47.033348] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:47.422 [2024-10-07 09:37:47.033353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:47.422 [2024-10-07 09:37:47.033367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:47.422 [2024-10-07 09:37:47.033373] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:47.422 [2024-10-07 09:37:47.033379] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:47.422 [2024-10-07 09:37:47.033385] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:47.422 [2024-10-07 09:37:47.033390] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:47.422 [2024-10-07 09:37:47.033393] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:47.422 [2024-10-07 09:37:47.033395] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:47.422 [2024-10-07 09:37:47.033400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:47.422 [2024-10-07 09:37:47.033415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:47.422 [2024-10-07 09:37:47.033424] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:47.422 [2024-10-07 09:37:47.033430] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:47.422 [2024-10-07 09:37:47.033434] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:47.422 [2024-10-07 09:37:47.033437] 
nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:47.422 [2024-10-07 09:37:47.033440] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:47.422 [2024-10-07 09:37:47.033444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:47.422 [2024-10-07 09:37:47.033457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:47.422 [2024-10-07 09:37:47.033464] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:47.422 [2024-10-07 09:37:47.033469] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:47.422 [2024-10-07 09:37:47.033474] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:47.422 [2024-10-07 09:37:47.033478] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:47.422 [2024-10-07 09:37:47.033481] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:47.422 [2024-10-07 09:37:47.033485] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:47.422 [2024-10-07 09:37:47.033489] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:47.422 [2024-10-07 09:37:47.033492] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:47.422 [2024-10-07 09:37:47.033495] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:47.422 [2024-10-07 09:37:47.033508] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:47.422 [2024-10-07 09:37:47.033516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:47.422 [2024-10-07 09:37:47.033524] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:47.422 [2024-10-07 09:37:47.033533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:47.422 [2024-10-07 09:37:47.033541] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:47.422 [2024-10-07 09:37:47.033549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:47.422 [2024-10-07 09:37:47.033557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:47.422 [2024-10-07 09:37:47.033564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:47.422 [2024-10-07 09:37:47.033573] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:47.422 [2024-10-07 09:37:47.033576] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:47.422 [2024-10-07 09:37:47.033579] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:47.422 [2024-10-07 09:37:47.033581] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:47.422 [2024-10-07 09:37:47.033584] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:47.422 [2024-10-07 09:37:47.033588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:47.422 [2024-10-07 09:37:47.033594] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:47.422 [2024-10-07 09:37:47.033597] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:47.422 [2024-10-07 09:37:47.033599] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:47.422 [2024-10-07 09:37:47.033603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:47.422 [2024-10-07 09:37:47.033610] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:47.422 [2024-10-07 09:37:47.033613] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:47.422 [2024-10-07 09:37:47.033621] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:47.422 [2024-10-07 09:37:47.033625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:47.422 [2024-10-07 09:37:47.033631] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:47.422 [2024-10-07 09:37:47.033634] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:47.422 [2024-10-07 09:37:47.033636] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:47.422 [2024-10-07 09:37:47.033640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:47.422 [2024-10-07 09:37:47.033645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:47.422 [2024-10-07 09:37:47.033654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:47.422 [2024-10-07 09:37:47.033662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:47.422 [2024-10-07 09:37:47.033667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:47.422 ===================================================== 00:16:47.422 NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:47.422 ===================================================== 00:16:47.422 Controller Capabilities/Features 00:16:47.422 ================================ 00:16:47.422 Vendor ID: 4e58 00:16:47.422 Subsystem Vendor ID: 4e58 00:16:47.422 Serial Number: SPDK1 00:16:47.422 Model Number: SPDK bdev Controller 00:16:47.422 Firmware Version: 25.01 00:16:47.422 Recommended Arb Burst: 6 00:16:47.422 IEEE OUI Identifier: 8d 6b 50 00:16:47.422 Multi-path I/O 00:16:47.422 May have multiple subsystem ports: Yes 00:16:47.422 May have multiple controllers: Yes 00:16:47.422 Associated with SR-IOV VF: No 00:16:47.422 Max Data Transfer Size: 131072 00:16:47.422 Max Number of Namespaces: 32 00:16:47.422 Max Number of I/O Queues: 127 00:16:47.422 NVMe Specification Version (VS): 1.3 00:16:47.422 NVMe Specification Version (Identify): 1.3 00:16:47.422 Maximum Queue Entries: 256 00:16:47.422 Contiguous Queues Required: Yes 00:16:47.422 Arbitration Mechanisms Supported 00:16:47.422 Weighted Round Robin: Not Supported 00:16:47.422 Vendor Specific: Not Supported 00:16:47.422 Reset Timeout: 15000 ms 00:16:47.422 Doorbell Stride: 4 bytes 00:16:47.422 NVM Subsystem Reset: Not Supported 00:16:47.422 Command Sets Supported 00:16:47.422 NVM Command Set: Supported 00:16:47.422 Boot Partition: Not Supported 00:16:47.422 Memory Page Size Minimum: 4096 bytes 00:16:47.422 Memory Page Size Maximum: 4096 bytes 00:16:47.422 Persistent Memory Region: Not Supported 00:16:47.422 Optional Asynchronous Events Supported 00:16:47.422 Namespace Attribute Notices: Supported 00:16:47.422 Firmware Activation Notices: Not Supported 00:16:47.422 ANA Change Notices: Not Supported 00:16:47.422 PLE Aggregate Log Change Notices: Not Supported 00:16:47.422 LBA Status Info Alert Notices: Not Supported 00:16:47.423 EGE Aggregate Log Change Notices: Not Supported 00:16:47.423 Normal NVM Subsystem Shutdown event: Not Supported 00:16:47.423 Zone Descriptor Change Notices: Not Supported 00:16:47.423 Discovery Log Change Notices: Not Supported 00:16:47.423 Controller Attributes 00:16:47.423 128-bit Host Identifier: Supported 00:16:47.423 Non-Operational Permissive Mode: Not Supported 00:16:47.423 NVM Sets: Not Supported 00:16:47.423 Read Recovery Levels: Not Supported 00:16:47.423 Endurance Groups: Not Supported 00:16:47.423 Predictable Latency Mode: Not Supported 00:16:47.423 Traffic Based Keep Alive: Not Supported 00:16:47.423 Namespace Granularity: Not Supported 00:16:47.423 SQ Associations: Not Supported 00:16:47.423 UUID List: Not Supported 00:16:47.423 Multi-Domain Subsystem: Not Supported 00:16:47.423 Fixed Capacity Management: Not Supported 00:16:47.423 Variable Capacity Management: Not Supported 00:16:47.423 Delete Endurance Group: Not Supported 00:16:47.423 Delete NVM Set: Not Supported 00:16:47.423 Extended LBA Formats Supported: Not Supported 00:16:47.423 Flexible Data Placement Supported: Not Supported 00:16:47.423 00:16:47.423 Controller Memory Buffer Support 00:16:47.423 ================================ 00:16:47.423 Supported: No 00:16:47.423 00:16:47.423 Persistent Memory Region Support 00:16:47.423 ================================ 00:16:47.423 Supported: No 00:16:47.423 00:16:47.423 Admin Command Set Attributes 00:16:47.423 ============================ 00:16:47.423 Security Send/Receive: Not Supported 00:16:47.423 Format NVM: Not Supported 00:16:47.423 Firmware Activate/Download: Not Supported 00:16:47.423 Namespace Management: Not Supported 00:16:47.423 Device
Self-Test: Not Supported 00:16:47.423 Directives: Not Supported 00:16:47.423 NVMe-MI: Not Supported 00:16:47.423 Virtualization Management: Not Supported 00:16:47.423 Doorbell Buffer Config: Not Supported 00:16:47.423 Get LBA Status Capability: Not Supported 00:16:47.423 Command & Feature Lockdown Capability: Not Supported 00:16:47.423 Abort Command Limit: 4 00:16:47.423 Async Event Request Limit: 4 00:16:47.423 Number of Firmware Slots: N/A 00:16:47.423 Firmware Slot 1 Read-Only: N/A 00:16:47.423 Firmware Activation Without Reset: N/A 00:16:47.423 Multiple Update Detection Support: N/A 00:16:47.423 Firmware Update Granularity: No Information Provided 00:16:47.423 Per-Namespace SMART Log: No 00:16:47.423 Asymmetric Namespace Access Log Page: Not Supported 00:16:47.423 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:47.423 Command Effects Log Page: Supported 00:16:47.423 Get Log Page Extended Data: Supported 00:16:47.423 Telemetry Log Pages: Not Supported 00:16:47.423 Persistent Event Log Pages: Not Supported 00:16:47.423 Supported Log Pages Log Page: May Support 00:16:47.423 Commands Supported & Effects Log Page: Not Supported 00:16:47.423 Feature Identifiers & Effects Log Page: May Support 00:16:47.423 NVMe-MI Commands & Effects Log Page: May Support 00:16:47.423 Data Area 4 for Telemetry Log: Not Supported 00:16:47.423 Error Log Page Entries Supported: 128 00:16:47.423 Keep Alive: Supported 00:16:47.423 Keep Alive Granularity: 10000 ms 00:16:47.423 00:16:47.423 NVM Command Set Attributes 00:16:47.423 ========================== 00:16:47.423 Submission Queue Entry Size 00:16:47.423 Max: 64 00:16:47.423 Min: 64 00:16:47.423 Completion Queue Entry Size 00:16:47.423 Max: 16 00:16:47.423 Min: 16 00:16:47.423 Number of Namespaces: 32 00:16:47.423 Compare Command: Supported 00:16:47.423 Write Uncorrectable Command: Not Supported 00:16:47.423 Dataset Management Command: Supported 00:16:47.423 Write Zeroes Command: Supported 00:16:47.423 Set Features Save Field: Not Supported 00:16:47.423 Reservations: Not Supported 00:16:47.423 Timestamp: Not Supported 00:16:47.423 Copy: Supported 00:16:47.423 Volatile Write Cache: Present 00:16:47.423 Atomic Write Unit (Normal): 1 00:16:47.423 Atomic Write Unit (PFail): 1 00:16:47.423 Atomic Compare & Write Unit: 1 00:16:47.423 Fused Compare & Write: Supported 00:16:47.423 Scatter-Gather List 00:16:47.423 SGL Command Set: Supported (Dword aligned) 00:16:47.423 SGL Keyed: Not Supported 00:16:47.423 SGL Bit Bucket Descriptor: Not Supported 00:16:47.423 SGL Metadata Pointer: Not Supported 00:16:47.423 Oversized SGL: Not Supported 00:16:47.423 SGL Metadata Address: Not Supported 00:16:47.423 SGL Offset: Not Supported 00:16:47.423 Transport SGL Data Block: Not Supported 00:16:47.423 Replay Protected Memory Block: Not Supported 00:16:47.423 00:16:47.423 Firmware Slot Information 00:16:47.423 ========================= 00:16:47.423 Active slot: 1 00:16:47.423 Slot 1 Firmware Revision: 25.01 00:16:47.423 00:16:47.423 00:16:47.423 Commands Supported and Effects 00:16:47.423 ============================== 00:16:47.423 Admin Commands 00:16:47.423 -------------- 00:16:47.423 Get Log Page (02h): Supported 00:16:47.423 Identify (06h): Supported 00:16:47.423 Abort (08h): Supported 00:16:47.423 Set Features (09h): Supported 00:16:47.423 Get Features (0Ah): Supported 00:16:47.423 Asynchronous Event Request (0Ch): Supported 00:16:47.423 Keep Alive (18h): Supported 00:16:47.423 I/O Commands 00:16:47.423 ------------ 00:16:47.423 Flush (00h): Supported LBA-Change 00:16:47.423 Write
(01h): Supported LBA-Change 00:16:47.423 Read (02h): Supported 00:16:47.423 Compare (05h): Supported 00:16:47.423 Write Zeroes (08h): Supported LBA-Change 00:16:47.423 Dataset Management (09h): Supported LBA-Change 00:16:47.423 Copy (19h): Supported LBA-Change 00:16:47.423 00:16:47.423 Error Log 00:16:47.423 ========= 00:16:47.423 00:16:47.423 Arbitration 00:16:47.423 =========== 00:16:47.423 Arbitration Burst: 1 00:16:47.423 00:16:47.423 Power Management 00:16:47.423 ================ 00:16:47.423 Number of Power States: 1 00:16:47.423 Current Power State: Power State #0 00:16:47.423 Power State #0: 00:16:47.423 Max Power: 0.00 W 00:16:47.423 Non-Operational State: Operational 00:16:47.423 Entry Latency: Not Reported 00:16:47.423 Exit Latency: Not Reported 00:16:47.423 Relative Read Throughput: 0 00:16:47.423 Relative Read Latency: 0 00:16:47.423 Relative Write Throughput: 0 00:16:47.423 Relative Write Latency: 0 00:16:47.423 Idle Power: Not Reported 00:16:47.423 Active Power: Not Reported 00:16:47.423 Non-Operational Permissive Mode: Not Supported 00:16:47.423 00:16:47.423 Health Information 00:16:47.423 ================== 00:16:47.423 Critical Warnings: 00:16:47.423 Available Spare Space: OK 00:16:47.423 Temperature: OK 00:16:47.423 Device Reliability: OK 00:16:47.423 Read Only: No 00:16:47.423 Volatile Memory Backup: OK 00:16:47.423 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:47.423 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:47.423 Available Spare: 0% 00:16:47.423 Available Spare Threshold: 0% 00:16:47.424 Life Percentage Used: 0% 00:16:47.424 Data Units Read: 0 00:16:47.424 Data Units Written: 0 00:16:47.424 Host Read Commands: 0 00:16:47.424 Host Write Commands: 0 00:16:47.424 Controller Busy Time: 0 minutes 00:16:47.424 Power Cycles: 0 00:16:47.424 Power On Hours: 0 hours 00:16:47.424 Unsafe Shutdowns: 0 00:16:47.424 Unrecoverable Media Errors: 0 00:16:47.424 Lifetime Error Log Entries: 0 00:16:47.424 Warning Temperature Time: 0 minutes 00:16:47.424 Critical Temperature Time: 0 minutes 00:16:47.424 00:16:47.424 Number of Queues 00:16:47.424 ================ 00:16:47.424 Number of I/O Submission Queues: 127 00:16:47.424 Number of I/O Completion Queues: 127 00:16:47.424 00:16:47.424 Active Namespaces 00:16:47.424 ================= 00:16:47.424 Namespace ID:1 00:16:47.424 Error Recovery Timeout: Unlimited 00:16:47.424 Command Set Identifier: NVM (00h) 00:16:47.424 Deallocate: Supported 00:16:47.424 Deallocated/Unwritten Error: Not Supported 00:16:47.424 Deallocated Read Value: Unknown 00:16:47.424 Deallocate in Write Zeroes: Not Supported 00:16:47.424 Deallocated Guard Field: 0xFFFF 00:16:47.424 Flush: Supported 00:16:47.424 Reservation: Supported 00:16:47.424 Namespace Sharing Capabilities: Multiple Controllers 00:16:47.424 Size (in LBAs): 131072 (0GiB) 00:16:47.424 Capacity (in LBAs): 131072 (0GiB) 00:16:47.424 Utilization (in LBAs): 131072 (0GiB) 00:16:47.424 NGUID: C249F6752468476490C04B146C5544AD 00:16:47.424 UUID: c249f675-2468-4764-90c0-4b146c5544ad 00:16:47.424 Thin Provisioning: Not Supported 00:16:47.424 Per-NS Atomic Units: Yes 00:16:47.424 Atomic Boundary Size (Normal): 0 00:16:47.424 Atomic Boundary Size (PFail): 0 00:16:47.424 Atomic Boundary Offset: 0 00:16:47.424 Maximum Single Source Range Length: 65535 00:16:47.424 Maximum Copy Length: 65535 00:16:47.424 Maximum Source Range Count: 1 00:16:47.424 NGUID/EUI64 Never Reused: No 00:16:47.424 Namespace Write Protected: No 00:16:47.424 Number of LBA Formats: 1 00:16:47.424 Current LBA Format: LBA Format #00 00:16:47.424 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:47.424 00:16:47.424
[2024-10-07 09:37:47.033739] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:47.423 [2024-10-07 09:37:47.033751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:47.423 [2024-10-07 09:37:47.033770] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:47.423 [2024-10-07 09:37:47.033776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.423 [2024-10-07 09:37:47.033781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.423 [2024-10-07 09:37:47.033785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.423 [2024-10-07 09:37:47.033790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.423 [2024-10-07 09:37:47.037623] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:47.423 [2024-10-07 09:37:47.037632] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:47.423 [2024-10-07 09:37:47.038092] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:47.423 [2024-10-07 09:37:47.038129] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:47.423 [2024-10-07 09:37:47.038134] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:47.423 [2024-10-07 09:37:47.039101] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:47.423 [2024-10-07 09:37:47.039109] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:47.424 [2024-10-07 09:37:47.039159] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:47.424 [2024-10-07 09:37:47.040117] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:47.424
09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:47.684 [2024-10-07 09:37:47.216280] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:52.981 Initializing NVMe Controllers 00:16:52.981 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:52.981 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:52.981 Initialization complete. Launching workers.
00:16:52.981 ======================================================== 00:16:52.981 Latency(us) 00:16:52.981 Device Information : IOPS MiB/s Average min max 00:16:52.981 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39971.59 156.14 3201.95 845.25 7766.42 00:16:52.981 ======================================================== 00:16:52.981 Total : 39971.59 156.14 3201.95 845.25 7766.42 00:16:52.981 00:16:52.981 [2024-10-07 09:37:52.232988] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:52.981 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:52.981 [2024-10-07 09:37:52.411811] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:58.272 Initializing NVMe Controllers 00:16:58.272 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:58.272 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:58.272 Initialization complete. Launching workers. 00:16:58.272 ======================================================== 00:16:58.272 Latency(us) 00:16:58.272 Device Information : IOPS MiB/s Average min max 00:16:58.272 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.74 7628.02 8023.89 00:16:58.272 ======================================================== 00:16:58.272 Total : 16051.20 62.70 7980.74 7628.02 8023.89 00:16:58.272 00:16:58.272 [2024-10-07 09:37:57.448599] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:58.272 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:58.272 [2024-10-07 09:37:57.638404] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:03.561 [2024-10-07 09:38:02.703789] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:03.561 Initializing NVMe Controllers 00:17:03.561 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:03.561 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:03.561 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:03.561 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:03.561 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:03.561 Initialization complete. Launching workers. 
00:17:03.561 Starting thread on core 2 00:17:03.561 Starting thread on core 3 00:17:03.561 Starting thread on core 1 00:17:03.561 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:03.561 [2024-10-07 09:38:02.946610] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:06.986 [2024-10-07 09:38:06.016593] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:06.986 Initializing NVMe Controllers 00:17:06.986 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:06.986 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:06.986 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:06.986 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:06.986 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:06.986 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:06.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:06.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:06.986 Initialization complete. Launching workers. 00:17:06.986 Starting thread on core 1 with urgent priority queue 00:17:06.986 Starting thread on core 2 with urgent priority queue 00:17:06.986 Starting thread on core 3 with urgent priority queue 00:17:06.986 Starting thread on core 0 with urgent priority queue 00:17:06.986 SPDK bdev Controller (SPDK1 ) core 0: 11022.00 IO/s 9.07 secs/100000 ios 00:17:06.986 SPDK bdev Controller (SPDK1 ) core 1: 8494.33 IO/s 11.77 secs/100000 ios 00:17:06.986 SPDK bdev Controller (SPDK1 ) core 2: 12703.33 IO/s 7.87 secs/100000 ios 00:17:06.986 SPDK bdev Controller (SPDK1 ) core 3: 7493.33 IO/s 13.35 secs/100000 ios 00:17:06.986 ======================================================== 00:17:06.986 00:17:06.986 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:06.986 [2024-10-07 09:38:06.242049] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:06.986 Initializing NVMe Controllers 00:17:06.986 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:06.986 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:06.986 Namespace ID: 1 size: 0GB 00:17:06.986 Initialization complete. 00:17:06.986 INFO: using host memory buffer for IO 00:17:06.986 Hello world! 
00:17:06.986 [2024-10-07 09:38:06.276256] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:06.986 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:06.986 [2024-10-07 09:38:06.509146] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:07.927 Initializing NVMe Controllers 00:17:07.927 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:07.927 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:07.927 Initialization complete. Launching workers. 00:17:07.927 submit (in ns) avg, min, max = 6282.7, 2832.5, 4000192.5 00:17:07.927 complete (in ns) avg, min, max = 15713.6, 1624.2, 3998960.0 00:17:07.927 00:17:07.927 Submit histogram 00:17:07.927 ================ 00:17:07.927 Range in us Cumulative Count 00:17:07.927 2.827 - 2.840: 0.1604% ( 33) 00:17:07.927 2.840 - 2.853: 0.6269% ( 96) 00:17:07.927 2.853 - 2.867: 2.6584% ( 418) 00:17:07.927 2.867 - 2.880: 6.5465% ( 800) 00:17:07.927 2.880 - 2.893: 10.6192% ( 838) 00:17:07.927 2.893 - 2.907: 15.5424% ( 1013) 00:17:07.927 2.907 - 2.920: 21.7000% ( 1267) 00:17:07.927 2.920 - 2.933: 27.5418% ( 1202) 00:17:07.927 2.933 - 2.947: 33.1794% ( 1160) 00:17:07.927 2.947 - 2.960: 39.0455% ( 1207) 00:17:07.927 2.960 - 2.973: 45.6308% ( 1355) 00:17:07.927 2.973 - 2.987: 52.7070% ( 1456) 00:17:07.927 2.987 - 3.000: 60.6435% ( 1633) 00:17:07.927 3.000 - 3.013: 69.9601% ( 1917) 00:17:07.927 3.013 - 3.027: 79.4761% ( 1958) 00:17:07.927 3.027 - 3.040: 87.3299% ( 1616) 00:17:07.927 3.040 - 3.053: 92.6905% ( 1103) 00:17:07.927 3.053 - 3.067: 95.7232% ( 624) 00:17:07.927 3.067 - 3.080: 97.5457% ( 375) 00:17:07.927 3.080 - 3.093: 98.5566% ( 208) 00:17:07.927 3.093 - 3.107: 99.1155% ( 115) 00:17:07.927 3.107 - 3.120: 99.3925% ( 57) 00:17:07.927 3.120 - 3.133: 99.5286% ( 28) 00:17:07.927 3.133 - 3.147: 99.5675% ( 8) 00:17:07.927 3.147 - 3.160: 99.5772% ( 2) 00:17:07.927 3.160 - 3.173: 99.5820% ( 1) 00:17:07.927 3.280 - 3.293: 99.5869% ( 1) 00:17:07.927 3.333 - 3.347: 99.5918% ( 1) 00:17:07.927 3.360 - 3.373: 99.5966% ( 1) 00:17:07.927 3.440 - 3.467: 99.6015% ( 1) 00:17:07.927 3.467 - 3.493: 99.6063% ( 1) 00:17:07.927 3.573 - 3.600: 99.6112% ( 1) 00:17:07.927 3.627 - 3.653: 99.6161% ( 1) 00:17:07.927 3.653 - 3.680: 99.6209% ( 1) 00:17:07.927 3.813 - 3.840: 99.6258% ( 1) 00:17:07.927 3.840 - 3.867: 99.6306% ( 1) 00:17:07.927 3.867 - 3.893: 99.6355% ( 1) 00:17:07.927 4.053 - 4.080: 99.6404% ( 1) 00:17:07.927 4.160 - 4.187: 99.6452% ( 1) 00:17:07.927 4.213 - 4.240: 99.6501% ( 1) 00:17:07.927 4.240 - 4.267: 99.6549% ( 1) 00:17:07.927 4.320 - 4.347: 99.6598% ( 1) 00:17:07.927 4.560 - 4.587: 99.6647% ( 1) 00:17:07.927 4.587 - 4.613: 99.6695% ( 1) 00:17:07.927 4.640 - 4.667: 99.6792% ( 2) 00:17:07.927 4.747 - 4.773: 99.6841% ( 1) 00:17:07.927 4.800 - 4.827: 99.6938% ( 2) 00:17:07.927 4.827 - 4.853: 99.6987% ( 1) 00:17:07.927 4.880 - 4.907: 99.7035% ( 1) 00:17:07.927 4.933 - 4.960: 99.7084% ( 1) 00:17:07.927 5.013 - 5.040: 99.7133% ( 1) 00:17:07.927 5.040 - 5.067: 99.7230% ( 2) 00:17:07.927 5.173 - 5.200: 99.7327% ( 2) 00:17:07.927 5.227 - 5.253: 99.7376% ( 1) 00:17:07.927 5.253 - 5.280: 99.7424% ( 1) 00:17:07.927 5.280 - 5.307: 99.7473% ( 1) 00:17:07.927 5.333 - 5.360: 99.7521% ( 1) 00:17:07.927 5.413 - 5.440: 
99.7570% ( 1) 00:17:07.927 5.547 - 5.573: 99.7667% ( 2) 00:17:07.927 5.733 - 5.760: 99.7716% ( 1) 00:17:07.927 5.813 - 5.840: 99.7764% ( 1) 00:17:07.927 5.840 - 5.867: 99.7813% ( 1) 00:17:07.927 5.947 - 5.973: 99.7862% ( 1) 00:17:07.927 6.027 - 6.053: 99.7959% ( 2) 00:17:07.927 6.187 - 6.213: 99.8007% ( 1) 00:17:07.927 6.240 - 6.267: 99.8056% ( 1) 00:17:07.927 6.320 - 6.347: 99.8105% ( 1) 00:17:07.927 6.347 - 6.373: 99.8153% ( 1) 00:17:07.927 6.373 - 6.400: 99.8348% ( 4) 00:17:07.927 6.427 - 6.453: 99.8445% ( 2) 00:17:07.927 6.533 - 6.560: 99.8493% ( 1) 00:17:07.927 6.560 - 6.587: 99.8542% ( 1) 00:17:07.927 6.613 - 6.640: 99.8591% ( 1) 00:17:07.927 6.747 - 6.773: 99.8639% ( 1) 00:17:07.927 6.827 - 6.880: 99.8736% ( 2) 00:17:07.927 6.933 - 6.987: 99.8785% ( 1) 00:17:07.927 6.987 - 7.040: 99.8882% ( 2) 00:17:07.927 7.040 - 7.093: 99.8931% ( 1) 00:17:07.927 7.093 - 7.147: 99.8979% ( 1) 00:17:07.927 7.413 - 7.467: 99.9028% ( 1) 00:17:07.927 7.680 - 7.733: 99.9077% ( 1) 00:17:07.927 9.333 - 9.387: 99.9125% ( 1) 00:17:07.927 10.773 - 10.827: 99.9174% ( 1) 00:17:07.927 3986.773 - 4014.080: 100.0000% ( 17) 00:17:07.927 [2024-10-07 09:38:07.527656] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:07.927 00:17:07.927 Complete histogram 00:17:07.928 ================== 00:17:07.928 Range in us Cumulative Count 00:17:07.928 1.620 - 1.627: 0.0049% ( 1) 00:17:07.928 1.633 - 1.640: 0.3596% ( 73) 00:17:07.928 1.640 - 1.647: 1.0498% ( 142) 00:17:07.928 1.647 - 1.653: 1.1324% ( 17) 00:17:07.928 1.653 - 1.660: 1.2636% ( 27) 00:17:07.928 1.660 - 1.667: 1.3414% ( 16) 00:17:07.928 1.667 - 1.673: 1.3754% ( 7) 00:17:07.928 1.673 - 1.680: 1.3851% ( 2) 00:17:07.928 1.687 - 1.693: 1.4143% ( 6) 00:17:07.928 1.693 - 1.700: 14.1087% ( 2612) 00:17:07.928 1.700 - 1.707: 37.6215% ( 4838) 00:17:07.928 1.707 - 1.720: 63.2193% ( 5267) 00:17:07.928 1.720 - 1.733: 78.0375% ( 3049) 00:17:07.928 1.733 - 1.747: 82.7858% ( 977) 00:17:07.928 1.747 - 1.760: 84.3799% ( 328) 00:17:07.928 1.760 - 1.773: 88.8948% ( 929) 00:17:07.928 1.773 - 1.787: 93.9395% ( 1038) 00:17:07.928 1.787 - 1.800: 97.3513% ( 702) 00:17:07.928 1.800 - 1.813: 98.8336% ( 305) 00:17:07.928 1.813 - 1.827: 99.2710% ( 90) 00:17:07.928 1.827 - 1.840: 99.3925% ( 25) 00:17:07.928 1.840 - 1.853: 99.4119% ( 4) 00:17:07.928 1.853 - 1.867: 99.4168% ( 1) 00:17:07.928 1.867 - 1.880: 99.4217% ( 1) 00:17:07.928 1.947 - 1.960: 99.4265% ( 1) 00:17:07.928 2.013 - 2.027: 99.4314% ( 1) 00:17:07.928 2.080 - 2.093: 99.4362% ( 1) 00:17:07.928 2.093 - 2.107: 99.4411% ( 1) 00:17:07.928 2.173 - 2.187: 99.4460% ( 1) 00:17:07.928 3.333 - 3.347: 99.4508% ( 1) 00:17:07.928 3.413 - 3.440: 99.4557% ( 1) 00:17:07.928 3.547 - 3.573: 99.4605% ( 1) 00:17:07.928 3.600 - 3.627: 99.4703% ( 2) 00:17:07.928 3.653 - 3.680: 99.4751% ( 1) 00:17:07.928 3.920 - 3.947: 99.4800% ( 1) 00:17:07.928 3.947 - 3.973: 99.4848% ( 1) 00:17:07.928 3.973 - 4.000: 99.4897% ( 1) 00:17:07.928 4.000 - 4.027: 99.4946% ( 1) 00:17:07.928 4.373 - 4.400: 99.4994% ( 1) 00:17:07.928 4.400 - 4.427: 99.5043% ( 1) 00:17:07.928 4.533 - 4.560: 99.5091% ( 1) 00:17:07.928 4.613 - 4.640: 99.5140% ( 1) 00:17:07.928 4.693 - 4.720: 99.5237% ( 2) 00:17:07.928 4.720 - 4.747: 99.5286% ( 1) 00:17:07.928 4.800 - 4.827: 99.5334% ( 1) 00:17:07.928 4.880 - 4.907: 99.5383% ( 1) 00:17:07.928 4.960 - 4.987: 99.5432% ( 1) 00:17:07.928 5.013 - 5.040: 99.5529% ( 2) 00:17:07.928 5.040 - 5.067: 99.5675% ( 3) 00:17:07.928 5.093 - 5.120: 99.5723% ( 1) 00:17:07.928 5.280 - 5.307: 99.5772% ( 1)
00:17:07.928 5.333 - 5.360: 99.5820% ( 1) 00:17:07.928 5.360 - 5.387: 99.5869% ( 1) 00:17:07.928 5.387 - 5.413: 99.5918% ( 1) 00:17:07.928 5.547 - 5.573: 99.5966% ( 1) 00:17:07.928 5.653 - 5.680: 99.6015% ( 1) 00:17:07.928 5.760 - 5.787: 99.6063% ( 1) 00:17:07.928 5.813 - 5.840: 99.6112% ( 1) 00:17:07.928 5.840 - 5.867: 99.6161% ( 1) 00:17:07.928 5.947 - 5.973: 99.6209% ( 1) 00:17:07.928 6.027 - 6.053: 99.6258% ( 1) 00:17:07.928 6.053 - 6.080: 99.6306% ( 1) 00:17:07.928 7.307 - 7.360: 99.6355% ( 1) 00:17:07.928 8.747 - 8.800: 99.6404% ( 1) 00:17:07.928 11.147 - 11.200: 99.6452% ( 1) 00:17:07.928 145.920 - 146.773: 99.6501% ( 1) 00:17:07.928 3986.773 - 4014.080: 100.0000% ( 72) 00:17:07.928 00:17:07.928 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:07.928 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:07.928 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:07.928 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:07.928 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:08.188 [ 00:17:08.188 { 00:17:08.188 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:08.188 "subtype": "Discovery", 00:17:08.188 "listen_addresses": [], 00:17:08.188 "allow_any_host": true, 00:17:08.188 "hosts": [] 00:17:08.188 }, 00:17:08.188 { 00:17:08.188 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:08.188 "subtype": "NVMe", 00:17:08.188 "listen_addresses": [ 00:17:08.188 { 00:17:08.188 "trtype": "VFIOUSER", 00:17:08.188 "adrfam": "IPv4", 00:17:08.188 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:08.188 "trsvcid": "0" 00:17:08.188 } 00:17:08.188 ], 00:17:08.188 "allow_any_host": true, 00:17:08.188 "hosts": [], 00:17:08.188 "serial_number": "SPDK1", 00:17:08.188 "model_number": "SPDK bdev Controller", 00:17:08.188 "max_namespaces": 32, 00:17:08.188 "min_cntlid": 1, 00:17:08.188 "max_cntlid": 65519, 00:17:08.188 "namespaces": [ 00:17:08.188 { 00:17:08.188 "nsid": 1, 00:17:08.188 "bdev_name": "Malloc1", 00:17:08.188 "name": "Malloc1", 00:17:08.188 "nguid": "C249F6752468476490C04B146C5544AD", 00:17:08.188 "uuid": "c249f675-2468-4764-90c0-4b146c5544ad" 00:17:08.188 } 00:17:08.188 ] 00:17:08.188 }, 00:17:08.188 { 00:17:08.188 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:08.188 "subtype": "NVMe", 00:17:08.188 "listen_addresses": [ 00:17:08.188 { 00:17:08.188 "trtype": "VFIOUSER", 00:17:08.188 "adrfam": "IPv4", 00:17:08.188 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:08.188 "trsvcid": "0" 00:17:08.188 } 00:17:08.188 ], 00:17:08.188 "allow_any_host": true, 00:17:08.188 "hosts": [], 00:17:08.188 "serial_number": "SPDK2", 00:17:08.188 "model_number": "SPDK bdev Controller", 00:17:08.188 "max_namespaces": 32, 00:17:08.188 "min_cntlid": 1, 00:17:08.188 "max_cntlid": 65519, 00:17:08.188 "namespaces": [ 00:17:08.188 { 00:17:08.188 "nsid": 1, 00:17:08.188 "bdev_name": "Malloc2", 00:17:08.188 "name": "Malloc2", 00:17:08.188 "nguid": "FF9EE0CF28124A47B3FBD0A99B2E5747", 00:17:08.188 "uuid": "ff9ee0cf-2812-4a47-b3fb-d0a99b2e5747" 00:17:08.188 } 00:17:08.188 ] 00:17:08.188 } 00:17:08.188 ] 00:17:08.188 09:38:07 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:08.188 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3318628 00:17:08.188 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:08.188 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:08.188 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- sync/functions.sh@10 -- # local i=0 00:17:08.188 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- sync/functions.sh@11 -- # [[ ! -e /tmp/aer_touch_file ]] 00:17:08.188 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- sync/functions.sh@15 -- # [[ ! -e /tmp/aer_touch_file ]] 00:17:08.188 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- sync/functions.sh@19 -- # return 0 00:17:08.188 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:08.188 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:08.449 [2024-10-07 09:38:07.901005] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:08.449 Malloc3 00:17:08.449 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:08.710 [2024-10-07 09:38:08.111422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:08.710 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:08.710 Asynchronous Event Request test 00:17:08.710 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:08.710 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:08.710 Registering asynchronous event callbacks... 00:17:08.710 Starting namespace attribute notice tests for all controllers... 00:17:08.710 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:08.710 aer_cb - Changed Namespace 00:17:08.710 Cleaning up... 
00:17:08.710 [ 00:17:08.710 { 00:17:08.710 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:08.710 "subtype": "Discovery", 00:17:08.710 "listen_addresses": [], 00:17:08.710 "allow_any_host": true, 00:17:08.710 "hosts": [] 00:17:08.710 }, 00:17:08.710 { 00:17:08.710 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:08.710 "subtype": "NVMe", 00:17:08.710 "listen_addresses": [ 00:17:08.710 { 00:17:08.710 "trtype": "VFIOUSER", 00:17:08.710 "adrfam": "IPv4", 00:17:08.710 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:08.710 "trsvcid": "0" 00:17:08.710 } 00:17:08.710 ], 00:17:08.710 "allow_any_host": true, 00:17:08.710 "hosts": [], 00:17:08.710 "serial_number": "SPDK1", 00:17:08.710 "model_number": "SPDK bdev Controller", 00:17:08.710 "max_namespaces": 32, 00:17:08.710 "min_cntlid": 1, 00:17:08.710 "max_cntlid": 65519, 00:17:08.710 "namespaces": [ 00:17:08.710 { 00:17:08.710 "nsid": 1, 00:17:08.710 "bdev_name": "Malloc1", 00:17:08.710 "name": "Malloc1", 00:17:08.710 "nguid": "C249F6752468476490C04B146C5544AD", 00:17:08.710 "uuid": "c249f675-2468-4764-90c0-4b146c5544ad" 00:17:08.710 }, 00:17:08.710 { 00:17:08.710 "nsid": 2, 00:17:08.710 "bdev_name": "Malloc3", 00:17:08.710 "name": "Malloc3", 00:17:08.710 "nguid": "FC6D544F76E340ED83AEC71016EA3454", 00:17:08.710 "uuid": "fc6d544f-76e3-40ed-83ae-c71016ea3454" 00:17:08.710 } 00:17:08.710 ] 00:17:08.710 }, 00:17:08.710 { 00:17:08.710 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:08.710 "subtype": "NVMe", 00:17:08.710 "listen_addresses": [ 00:17:08.710 { 00:17:08.710 "trtype": "VFIOUSER", 00:17:08.710 "adrfam": "IPv4", 00:17:08.710 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:08.710 "trsvcid": "0" 00:17:08.710 } 00:17:08.710 ], 00:17:08.710 "allow_any_host": true, 00:17:08.710 "hosts": [], 00:17:08.710 "serial_number": "SPDK2", 00:17:08.710 "model_number": "SPDK bdev Controller", 00:17:08.710 "max_namespaces": 32, 00:17:08.710 "min_cntlid": 1, 00:17:08.710 "max_cntlid": 65519, 00:17:08.710 "namespaces": [ 00:17:08.710 { 00:17:08.710 "nsid": 1, 00:17:08.710 "bdev_name": "Malloc2", 00:17:08.710 "name": "Malloc2", 00:17:08.710 "nguid": "FF9EE0CF28124A47B3FBD0A99B2E5747", 00:17:08.710 "uuid": "ff9ee0cf-2812-4a47-b3fb-d0a99b2e5747" 00:17:08.710 } 00:17:08.710 ] 00:17:08.710 } 00:17:08.710 ] 00:17:08.710 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3318628 00:17:08.710 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:08.710 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:08.710 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:08.710 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:08.710 [2024-10-07 09:38:08.342409] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:17:08.710 [2024-10-07 09:38:08.342455] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3318660 ] 00:17:08.710 [2024-10-07 09:38:08.370655] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:08.972 [2024-10-07 09:38:08.374383] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:08.972 [2024-10-07 09:38:08.374401] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdee24e8000 00:17:08.972 [2024-10-07 09:38:08.375384] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:08.972 [2024-10-07 09:38:08.376386] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:08.972 [2024-10-07 09:38:08.377392] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:08.972 [2024-10-07 09:38:08.378402] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:08.972 [2024-10-07 09:38:08.379406] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:08.972 [2024-10-07 09:38:08.380418] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:08.972 [2024-10-07 09:38:08.381419] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:08.972 [2024-10-07 09:38:08.382427] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:08.972 [2024-10-07 09:38:08.383433] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:08.972 [2024-10-07 09:38:08.383440] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdee24dd000 00:17:08.972 [2024-10-07 09:38:08.384352] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:08.972 [2024-10-07 09:38:08.395736] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:08.972 [2024-10-07 09:38:08.395754] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:08.972 [2024-10-07 09:38:08.397794] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:08.972 [2024-10-07 09:38:08.397827] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:08.972 [2024-10-07 09:38:08.397883] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:17:08.972 [2024-10-07 
09:38:08.397896] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:08.972 [2024-10-07 09:38:08.397900] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:17:08.972 [2024-10-07 09:38:08.398800] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:08.972 [2024-10-07 09:38:08.398807] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:08.972 [2024-10-07 09:38:08.398812] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:08.972 [2024-10-07 09:38:08.399805] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:08.972 [2024-10-07 09:38:08.399811] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:08.972 [2024-10-07 09:38:08.399820] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:08.972 [2024-10-07 09:38:08.400812] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:08.972 [2024-10-07 09:38:08.400818] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:08.972 [2024-10-07 09:38:08.401819] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:08.972 [2024-10-07 09:38:08.401825] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:08.972 [2024-10-07 09:38:08.401829] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:08.972 [2024-10-07 09:38:08.401834] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:08.972 [2024-10-07 09:38:08.401938] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:08.972 [2024-10-07 09:38:08.401941] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:08.972 [2024-10-07 09:38:08.401945] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:08.972 [2024-10-07 09:38:08.406621] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:08.972 [2024-10-07 09:38:08.406851] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:08.972 [2024-10-07 09:38:08.407864] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:17:08.972 [2024-10-07 09:38:08.408868] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:08.972 [2024-10-07 09:38:08.408899] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:08.972 [2024-10-07 09:38:08.409877] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:08.972 [2024-10-07 09:38:08.409883] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:08.973 [2024-10-07 09:38:08.409887] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.409902] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:08.973 [2024-10-07 09:38:08.409910] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.409919] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:08.973 [2024-10-07 09:38:08.409922] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:08.973 [2024-10-07 09:38:08.409925] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:08.973 [2024-10-07 09:38:08.409934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:08.973 [2024-10-07 09:38:08.417622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:08.973 [2024-10-07 09:38:08.417632] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:17:08.973 [2024-10-07 09:38:08.417636] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:08.973 [2024-10-07 09:38:08.417639] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:08.973 [2024-10-07 09:38:08.417642] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:08.973 [2024-10-07 09:38:08.417645] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:08.973 [2024-10-07 09:38:08.417649] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:08.973 [2024-10-07 09:38:08.417652] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.417658] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.417665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:08.973 [2024-10-07 09:38:08.425622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:08.973 [2024-10-07 09:38:08.425632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:08.973 [2024-10-07 09:38:08.425639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:08.973 [2024-10-07 09:38:08.425645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:08.973 [2024-10-07 09:38:08.425651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:08.973 [2024-10-07 09:38:08.425654] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.425662] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.425669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:08.973 [2024-10-07 09:38:08.433622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:08.973 [2024-10-07 09:38:08.433628] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:08.973 [2024-10-07 09:38:08.433631] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.433636] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.433642] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.433648] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:08.973 [2024-10-07 09:38:08.441622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:08.973 [2024-10-07 09:38:08.441668] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.441676] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.441681] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:08.973 [2024-10-07 09:38:08.441685] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:08.973 [2024-10-07 09:38:08.441687] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:17:08.973 [2024-10-07 09:38:08.441692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:08.973 [2024-10-07 09:38:08.449621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:08.973 [2024-10-07 09:38:08.449629] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:08.973 [2024-10-07 09:38:08.449635] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.449641] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.449646] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:08.973 [2024-10-07 09:38:08.449649] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:08.973 [2024-10-07 09:38:08.449651] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:08.973 [2024-10-07 09:38:08.449656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:08.973 [2024-10-07 09:38:08.457620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:08.973 [2024-10-07 09:38:08.457631] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.457637] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.457642] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:08.973 [2024-10-07 09:38:08.457645] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:08.973 [2024-10-07 09:38:08.457648] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:08.973 [2024-10-07 09:38:08.457652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:08.973 [2024-10-07 09:38:08.465620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:08.973 [2024-10-07 09:38:08.465627] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.465632] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.465638] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.465642] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.465646] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.465651] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.465654] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:08.973 [2024-10-07 09:38:08.465657] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:08.973 [2024-10-07 09:38:08.465661] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:08.973 [2024-10-07 09:38:08.465673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:08.973 [2024-10-07 09:38:08.473620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:08.973 [2024-10-07 09:38:08.473630] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:08.973 [2024-10-07 09:38:08.481619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:08.973 [2024-10-07 09:38:08.481629] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:08.973 [2024-10-07 09:38:08.489621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:08.973 [2024-10-07 09:38:08.489630] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:08.973 [2024-10-07 09:38:08.497621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:08.973 [2024-10-07 09:38:08.497634] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:08.973 [2024-10-07 09:38:08.497638] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:08.973 [2024-10-07 09:38:08.497641] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:08.973 [2024-10-07 09:38:08.497643] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:08.973 [2024-10-07 09:38:08.497645] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:08.973 [2024-10-07 09:38:08.497650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:08.973 [2024-10-07 09:38:08.497655] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:08.973 [2024-10-07 09:38:08.497658] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:08.974 [2024-10-07 09:38:08.497661] 
nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:08.974 [2024-10-07 09:38:08.497665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:08.974 [2024-10-07 09:38:08.497670] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:08.974 [2024-10-07 09:38:08.497673] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:08.974 [2024-10-07 09:38:08.497676] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:08.974 [2024-10-07 09:38:08.497680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:08.974 [2024-10-07 09:38:08.497686] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:08.974 [2024-10-07 09:38:08.497689] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:08.974 [2024-10-07 09:38:08.497693] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:08.974 [2024-10-07 09:38:08.497697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:08.974 [2024-10-07 09:38:08.505620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:08.974 [2024-10-07 09:38:08.505632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:08.974 [2024-10-07 09:38:08.505640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:08.974 [2024-10-07 09:38:08.505645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:08.974 ===================================================== 00:17:08.974 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:08.974 ===================================================== 00:17:08.974 Controller Capabilities/Features 00:17:08.974 ================================ 00:17:08.974 Vendor ID: 4e58 00:17:08.974 Subsystem Vendor ID: 4e58 00:17:08.974 Serial Number: SPDK2 00:17:08.974 Model Number: SPDK bdev Controller 00:17:08.974 Firmware Version: 25.01 00:17:08.974 Recommended Arb Burst: 6 00:17:08.974 IEEE OUI Identifier: 8d 6b 50 00:17:08.974 Multi-path I/O 00:17:08.974 May have multiple subsystem ports: Yes 00:17:08.974 May have multiple controllers: Yes 00:17:08.974 Associated with SR-IOV VF: No 00:17:08.974 Max Data Transfer Size: 131072 00:17:08.974 Max Number of Namespaces: 32 00:17:08.974 Max Number of I/O Queues: 127 00:17:08.974 NVMe Specification Version (VS): 1.3 00:17:08.974 NVMe Specification Version (Identify): 1.3 00:17:08.974 Maximum Queue Entries: 256 00:17:08.974 Contiguous Queues Required: Yes 00:17:08.974 Arbitration Mechanisms Supported 00:17:08.974 Weighted Round Robin: Not Supported 00:17:08.974 Vendor Specific: Not Supported 00:17:08.974 Reset Timeout: 15000 ms 00:17:08.974 Doorbell Stride: 4 bytes 00:17:08.974 NVM Subsystem Reset: Not Supported 00:17:08.974 Command 
Sets Supported 00:17:08.974 NVM Command Set: Supported 00:17:08.974 Boot Partition: Not Supported 00:17:08.974 Memory Page Size Minimum: 4096 bytes 00:17:08.974 Memory Page Size Maximum: 4096 bytes 00:17:08.974 Persistent Memory Region: Not Supported 00:17:08.974 Optional Asynchronous Events Supported 00:17:08.974 Namespace Attribute Notices: Supported 00:17:08.974 Firmware Activation Notices: Not Supported 00:17:08.974 ANA Change Notices: Not Supported 00:17:08.974 PLE Aggregate Log Change Notices: Not Supported 00:17:08.974 LBA Status Info Alert Notices: Not Supported 00:17:08.974 EGE Aggregate Log Change Notices: Not Supported 00:17:08.974 Normal NVM Subsystem Shutdown event: Not Supported 00:17:08.974 Zone Descriptor Change Notices: Not Supported 00:17:08.974 Discovery Log Change Notices: Not Supported 00:17:08.974 Controller Attributes 00:17:08.974 128-bit Host Identifier: Supported 00:17:08.974 Non-Operational Permissive Mode: Not Supported 00:17:08.974 NVM Sets: Not Supported 00:17:08.974 Read Recovery Levels: Not Supported 00:17:08.974 Endurance Groups: Not Supported 00:17:08.974 Predictable Latency Mode: Not Supported 00:17:08.974 Traffic Based Keep ALive: Not Supported 00:17:08.974 Namespace Granularity: Not Supported 00:17:08.974 SQ Associations: Not Supported 00:17:08.974 UUID List: Not Supported 00:17:08.974 Multi-Domain Subsystem: Not Supported 00:17:08.974 Fixed Capacity Management: Not Supported 00:17:08.974 Variable Capacity Management: Not Supported 00:17:08.974 Delete Endurance Group: Not Supported 00:17:08.974 Delete NVM Set: Not Supported 00:17:08.974 Extended LBA Formats Supported: Not Supported 00:17:08.974 Flexible Data Placement Supported: Not Supported 00:17:08.974 00:17:08.974 Controller Memory Buffer Support 00:17:08.974 ================================ 00:17:08.974 Supported: No 00:17:08.974 00:17:08.974 Persistent Memory Region Support 00:17:08.974 ================================ 00:17:08.974 Supported: No 00:17:08.974 00:17:08.974 Admin Command Set Attributes 00:17:08.974 ============================ 00:17:08.974 Security Send/Receive: Not Supported 00:17:08.974 Format NVM: Not Supported 00:17:08.974 Firmware Activate/Download: Not Supported 00:17:08.974 Namespace Management: Not Supported 00:17:08.974 Device Self-Test: Not Supported 00:17:08.974 Directives: Not Supported 00:17:08.974 NVMe-MI: Not Supported 00:17:08.974 Virtualization Management: Not Supported 00:17:08.974 Doorbell Buffer Config: Not Supported 00:17:08.974 Get LBA Status Capability: Not Supported 00:17:08.974 Command & Feature Lockdown Capability: Not Supported 00:17:08.974 Abort Command Limit: 4 00:17:08.974 Async Event Request Limit: 4 00:17:08.974 Number of Firmware Slots: N/A 00:17:08.974 Firmware Slot 1 Read-Only: N/A 00:17:08.974 Firmware Activation Without Reset: N/A 00:17:08.974 Multiple Update Detection Support: N/A 00:17:08.974 Firmware Update Granularity: No Information Provided 00:17:08.974 Per-Namespace SMART Log: No 00:17:08.974 Asymmetric Namespace Access Log Page: Not Supported 00:17:08.974 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:08.974 Command Effects Log Page: Supported 00:17:08.974 Get Log Page Extended Data: Supported 00:17:08.974 Telemetry Log Pages: Not Supported 00:17:08.974 Persistent Event Log Pages: Not Supported 00:17:08.974 Supported Log Pages Log Page: May Support 00:17:08.974 Commands Supported & Effects Log Page: Not Supported 00:17:08.974 Feature Identifiers & Effects Log Page:May Support 00:17:08.974 NVMe-MI Commands & Effects Log Page: May Support 
00:17:08.974 Data Area 4 for Telemetry Log: Not Supported 00:17:08.974 Error Log Page Entries Supported: 128 00:17:08.974 Keep Alive: Supported 00:17:08.974 Keep Alive Granularity: 10000 ms 00:17:08.974 00:17:08.974 NVM Command Set Attributes 00:17:08.974 ========================== 00:17:08.974 Submission Queue Entry Size 00:17:08.974 Max: 64 00:17:08.974 Min: 64 00:17:08.974 Completion Queue Entry Size 00:17:08.974 Max: 16 00:17:08.974 Min: 16 00:17:08.974 Number of Namespaces: 32 00:17:08.974 Compare Command: Supported 00:17:08.974 Write Uncorrectable Command: Not Supported 00:17:08.974 Dataset Management Command: Supported 00:17:08.974 Write Zeroes Command: Supported 00:17:08.974 Set Features Save Field: Not Supported 00:17:08.974 Reservations: Not Supported 00:17:08.974 Timestamp: Not Supported 00:17:08.974 Copy: Supported 00:17:08.974 Volatile Write Cache: Present 00:17:08.974 Atomic Write Unit (Normal): 1 00:17:08.974 Atomic Write Unit (PFail): 1 00:17:08.974 Atomic Compare & Write Unit: 1 00:17:08.974 Fused Compare & Write: Supported 00:17:08.974 Scatter-Gather List 00:17:08.974 SGL Command Set: Supported (Dword aligned) 00:17:08.974 SGL Keyed: Not Supported 00:17:08.974 SGL Bit Bucket Descriptor: Not Supported 00:17:08.974 SGL Metadata Pointer: Not Supported 00:17:08.974 Oversized SGL: Not Supported 00:17:08.974 SGL Metadata Address: Not Supported 00:17:08.974 SGL Offset: Not Supported 00:17:08.974 Transport SGL Data Block: Not Supported 00:17:08.974 Replay Protected Memory Block: Not Supported 00:17:08.974 00:17:08.974 Firmware Slot Information 00:17:08.974 ========================= 00:17:08.974 Active slot: 1 00:17:08.974 Slot 1 Firmware Revision: 25.01 00:17:08.974 00:17:08.974 00:17:08.974 Commands Supported and Effects 00:17:08.974 ============================== 00:17:08.974 Admin Commands 00:17:08.974 -------------- 00:17:08.974 Get Log Page (02h): Supported 00:17:08.974 Identify (06h): Supported 00:17:08.974 Abort (08h): Supported 00:17:08.974 Set Features (09h): Supported 00:17:08.974 Get Features (0Ah): Supported 00:17:08.974 Asynchronous Event Request (0Ch): Supported 00:17:08.974 Keep Alive (18h): Supported 00:17:08.974 I/O Commands 00:17:08.975 ------------ 00:17:08.975 Flush (00h): Supported LBA-Change 00:17:08.975 Write (01h): Supported LBA-Change 00:17:08.975 Read (02h): Supported 00:17:08.975 Compare (05h): Supported 00:17:08.975 Write Zeroes (08h): Supported LBA-Change 00:17:08.975 Dataset Management (09h): Supported LBA-Change 00:17:08.975 Copy (19h): Supported LBA-Change 00:17:08.975 00:17:08.975 Error Log 00:17:08.975 ========= 00:17:08.975 00:17:08.975 Arbitration 00:17:08.975 =========== 00:17:08.975 Arbitration Burst: 1 00:17:08.975 00:17:08.975 Power Management 00:17:08.975 ================ 00:17:08.975 Number of Power States: 1 00:17:08.975 Current Power State: Power State #0 00:17:08.975 Power State #0: 00:17:08.975 Max Power: 0.00 W 00:17:08.975 Non-Operational State: Operational 00:17:08.975 Entry Latency: Not Reported 00:17:08.975 Exit Latency: Not Reported 00:17:08.975 Relative Read Throughput: 0 00:17:08.975 Relative Read Latency: 0 00:17:08.975 Relative Write Throughput: 0 00:17:08.975 Relative Write Latency: 0 00:17:08.975 Idle Power: Not Reported 00:17:08.975 Active Power: Not Reported 00:17:08.975 Non-Operational Permissive Mode: Not Supported 00:17:08.975 00:17:08.975 Health Information 00:17:08.975 ================== 00:17:08.975 Critical Warnings: 00:17:08.975 Available Spare Space: OK 00:17:08.975 Temperature: OK 00:17:08.975 Device 
Reliability: OK 00:17:08.975 Read Only: No 00:17:08.975 Volatile Memory Backup: OK 00:17:08.975 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:08.975 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:08.975 Available Spare: 0% 00:17:08.975 Available Sp[2024-10-07 09:38:08.505714] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:08.975 [2024-10-07 09:38:08.513621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:08.975 [2024-10-07 09:38:08.513642] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:08.975 [2024-10-07 09:38:08.513649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.975 [2024-10-07 09:38:08.513653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.975 [2024-10-07 09:38:08.513658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.975 [2024-10-07 09:38:08.513663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.975 [2024-10-07 09:38:08.513691] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:08.975 [2024-10-07 09:38:08.513699] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:08.975 [2024-10-07 09:38:08.514697] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:08.975 [2024-10-07 09:38:08.514734] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:08.975 [2024-10-07 09:38:08.514739] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:08.975 [2024-10-07 09:38:08.515696] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:08.975 [2024-10-07 09:38:08.515704] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:08.975 [2024-10-07 09:38:08.515749] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:08.975 [2024-10-07 09:38:08.516712] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:08.975 are Threshold: 0% 00:17:08.975 Life Percentage Used: 0% 00:17:08.975 Data Units Read: 0 00:17:08.975 Data Units Written: 0 00:17:08.975 Host Read Commands: 0 00:17:08.975 Host Write Commands: 0 00:17:08.975 Controller Busy Time: 0 minutes 00:17:08.975 Power Cycles: 0 00:17:08.975 Power On Hours: 0 hours 00:17:08.975 Unsafe Shutdowns: 0 00:17:08.975 Unrecoverable Media Errors: 0 00:17:08.975 Lifetime Error Log Entries: 0 00:17:08.975 Warning Temperature Time: 0 minutes 00:17:08.975 Critical Temperature Time: 0 minutes 00:17:08.975 00:17:08.975 Number of Queues 00:17:08.975 ================ 00:17:08.975 Number of 
I/O Submission Queues: 127 00:17:08.975 Number of I/O Completion Queues: 127 00:17:08.975 00:17:08.975 Active Namespaces 00:17:08.975 ================= 00:17:08.975 Namespace ID:1 00:17:08.975 Error Recovery Timeout: Unlimited 00:17:08.975 Command Set Identifier: NVM (00h) 00:17:08.975 Deallocate: Supported 00:17:08.975 Deallocated/Unwritten Error: Not Supported 00:17:08.975 Deallocated Read Value: Unknown 00:17:08.975 Deallocate in Write Zeroes: Not Supported 00:17:08.975 Deallocated Guard Field: 0xFFFF 00:17:08.975 Flush: Supported 00:17:08.975 Reservation: Supported 00:17:08.975 Namespace Sharing Capabilities: Multiple Controllers 00:17:08.975 Size (in LBAs): 131072 (0GiB) 00:17:08.975 Capacity (in LBAs): 131072 (0GiB) 00:17:08.975 Utilization (in LBAs): 131072 (0GiB) 00:17:08.975 NGUID: FF9EE0CF28124A47B3FBD0A99B2E5747 00:17:08.975 UUID: ff9ee0cf-2812-4a47-b3fb-d0a99b2e5747 00:17:08.975 Thin Provisioning: Not Supported 00:17:08.975 Per-NS Atomic Units: Yes 00:17:08.975 Atomic Boundary Size (Normal): 0 00:17:08.975 Atomic Boundary Size (PFail): 0 00:17:08.975 Atomic Boundary Offset: 0 00:17:08.975 Maximum Single Source Range Length: 65535 00:17:08.975 Maximum Copy Length: 65535 00:17:08.975 Maximum Source Range Count: 1 00:17:08.975 NGUID/EUI64 Never Reused: No 00:17:08.975 Namespace Write Protected: No 00:17:08.975 Number of LBA Formats: 1 00:17:08.975 Current LBA Format: LBA Format #00 00:17:08.975 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:08.975 00:17:08.975 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:09.235 [2024-10-07 09:38:08.696008] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:14.639 Initializing NVMe Controllers 00:17:14.639 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:14.639 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:14.639 Initialization complete. Launching workers. 
00:17:14.639 ======================================================== 00:17:14.639 Latency(us) 00:17:14.639 Device Information : IOPS MiB/s Average min max 00:17:14.639 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39985.16 156.19 3201.06 840.59 7763.61 00:17:14.639 ======================================================== 00:17:14.639 Total : 39985.16 156.19 3201.06 840.59 7763.61 00:17:14.639 00:17:14.639 [2024-10-07 09:38:13.800804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:14.639 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:14.639 [2024-10-07 09:38:13.981341] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:19.928 Initializing NVMe Controllers 00:17:19.928 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:19.928 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:19.928 Initialization complete. Launching workers. 00:17:19.928 ======================================================== 00:17:19.928 Latency(us) 00:17:19.928 Device Information : IOPS MiB/s Average min max 00:17:19.928 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40047.58 156.44 3196.07 846.27 6912.15 00:17:19.928 ======================================================== 00:17:19.928 Total : 40047.58 156.44 3196.07 846.27 6912.15 00:17:19.928 00:17:19.928 [2024-10-07 09:38:19.000868] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:19.928 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:19.928 [2024-10-07 09:38:19.191006] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:25.224 [2024-10-07 09:38:24.324697] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:25.224 Initializing NVMe Controllers 00:17:25.224 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:25.224 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:25.224 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:25.224 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:25.224 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:25.224 Initialization complete. Launching workers. 
00:17:25.224 Starting thread on core 2 00:17:25.224 Starting thread on core 3 00:17:25.224 Starting thread on core 1 00:17:25.224 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:25.224 [2024-10-07 09:38:24.561005] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:28.526 [2024-10-07 09:38:27.605162] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:28.526 Initializing NVMe Controllers 00:17:28.526 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:28.526 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:28.526 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:28.526 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:28.526 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:28.526 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:28.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:28.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:28.526 Initialization complete. Launching workers. 00:17:28.526 Starting thread on core 1 with urgent priority queue 00:17:28.526 Starting thread on core 2 with urgent priority queue 00:17:28.526 Starting thread on core 3 with urgent priority queue 00:17:28.526 Starting thread on core 0 with urgent priority queue 00:17:28.526 SPDK bdev Controller (SPDK2 ) core 0: 15138.33 IO/s 6.61 secs/100000 ios 00:17:28.526 SPDK bdev Controller (SPDK2 ) core 1: 15366.00 IO/s 6.51 secs/100000 ios 00:17:28.526 SPDK bdev Controller (SPDK2 ) core 2: 11196.00 IO/s 8.93 secs/100000 ios 00:17:28.526 SPDK bdev Controller (SPDK2 ) core 3: 10807.67 IO/s 9.25 secs/100000 ios 00:17:28.526 ======================================================== 00:17:28.526 00:17:28.526 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:28.526 [2024-10-07 09:38:27.833993] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:28.526 Initializing NVMe Controllers 00:17:28.526 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:28.526 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:28.526 Namespace ID: 1 size: 0GB 00:17:28.526 Initialization complete. 00:17:28.526 INFO: using host memory buffer for IO 00:17:28.526 Hello world! 
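[Editor's note: a minimal reference sketch, not part of the captured test output. The perf, reconnect, arbitration, and hello_world runs above all address the same vfio-user controller through an SPDK transport ID string rather than a PCIe address. With the binary path, socket path, subsystem NQN, and workload flags copied from the read run above, the equivalent manual invocation is:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The other example apps exercised here (reconnect, arbitration, hello_world, overhead) accept the same -r transport string, as their command lines in this log show.]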
00:17:28.526 [2024-10-07 09:38:27.846060] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:28.526 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:28.526 [2024-10-07 09:38:28.069323] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:29.912 Initializing NVMe Controllers 00:17:29.912 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:29.912 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:29.912 Initialization complete. Launching workers. 00:17:29.912 submit (in ns) avg, min, max = 5660.8, 2821.7, 3997872.5 00:17:29.912 complete (in ns) avg, min, max = 14200.7, 1634.2, 6758432.5 00:17:29.912 00:17:29.912 Submit histogram 00:17:29.912 ================ 00:17:29.912 Range in us Cumulative Count 00:17:29.912 2.813 - 2.827: 0.1299% ( 27) 00:17:29.912 2.827 - 2.840: 0.6061% ( 99) 00:17:29.912 2.840 - 2.853: 2.0590% ( 302) 00:17:29.912 2.853 - 2.867: 5.2918% ( 672) 00:17:29.912 2.867 - 2.880: 10.9203% ( 1170) 00:17:29.912 2.880 - 2.893: 16.2890% ( 1116) 00:17:29.912 2.893 - 2.907: 21.7973% ( 1145) 00:17:29.912 2.907 - 2.920: 27.8107% ( 1250) 00:17:29.912 2.920 - 2.933: 33.4103% ( 1164) 00:17:29.912 2.933 - 2.947: 38.9089% ( 1143) 00:17:29.912 2.947 - 2.960: 44.9271% ( 1251) 00:17:29.912 2.960 - 2.973: 52.3115% ( 1535) 00:17:29.912 2.973 - 2.987: 60.8794% ( 1781) 00:17:29.912 2.987 - 3.000: 69.6878% ( 1831) 00:17:29.912 3.000 - 3.013: 78.3134% ( 1793) 00:17:29.912 3.013 - 3.027: 85.7459% ( 1545) 00:17:29.912 3.027 - 3.040: 91.3744% ( 1170) 00:17:29.912 3.040 - 3.053: 94.8093% ( 714) 00:17:29.912 3.053 - 3.067: 96.7143% ( 396) 00:17:29.912 3.067 - 3.080: 98.1094% ( 290) 00:17:29.912 3.080 - 3.093: 98.9801% ( 181) 00:17:29.912 3.093 - 3.107: 99.4564% ( 99) 00:17:29.912 3.107 - 3.120: 99.5286% ( 15) 00:17:29.912 3.120 - 3.133: 99.5670% ( 8) 00:17:29.912 3.133 - 3.147: 99.5767% ( 2) 00:17:29.912 3.160 - 3.173: 99.5863% ( 2) 00:17:29.912 3.200 - 3.213: 99.5911% ( 1) 00:17:29.912 3.373 - 3.387: 99.5959% ( 1) 00:17:29.912 3.440 - 3.467: 99.6055% ( 2) 00:17:29.912 3.547 - 3.573: 99.6103% ( 1) 00:17:29.912 3.627 - 3.653: 99.6200% ( 2) 00:17:29.912 3.787 - 3.813: 99.6248% ( 1) 00:17:29.912 4.027 - 4.053: 99.6296% ( 1) 00:17:29.912 4.053 - 4.080: 99.6344% ( 1) 00:17:29.912 4.320 - 4.347: 99.6392% ( 1) 00:17:29.913 4.427 - 4.453: 99.6440% ( 1) 00:17:29.913 4.533 - 4.560: 99.6488% ( 1) 00:17:29.913 4.560 - 4.587: 99.6536% ( 1) 00:17:29.913 4.587 - 4.613: 99.6584% ( 1) 00:17:29.913 4.640 - 4.667: 99.6633% ( 1) 00:17:29.913 4.773 - 4.800: 99.6681% ( 1) 00:17:29.913 4.800 - 4.827: 99.6729% ( 1) 00:17:29.913 4.827 - 4.853: 99.6777% ( 1) 00:17:29.913 4.880 - 4.907: 99.6825% ( 1) 00:17:29.913 4.907 - 4.933: 99.7017% ( 4) 00:17:29.913 4.933 - 4.960: 99.7065% ( 1) 00:17:29.913 4.960 - 4.987: 99.7114% ( 1) 00:17:29.913 5.067 - 5.093: 99.7162% ( 1) 00:17:29.913 5.093 - 5.120: 99.7258% ( 2) 00:17:29.913 5.173 - 5.200: 99.7354% ( 2) 00:17:29.913 5.253 - 5.280: 99.7402% ( 1) 00:17:29.913 5.280 - 5.307: 99.7498% ( 2) 00:17:29.913 5.333 - 5.360: 99.7547% ( 1) 00:17:29.913 5.360 - 5.387: 99.7643% ( 2) 00:17:29.913 5.573 - 5.600: 99.7691% ( 1) 00:17:29.913 5.760 - 5.787: 99.7739% ( 1) 00:17:29.913 5.867 - 5.893: 99.7787% ( 1) 00:17:29.913 5.947 - 5.973: 
99.7835% ( 1) 00:17:29.913 5.973 - 6.000: 99.7883% ( 1) 00:17:29.913 6.000 - 6.027: 99.8028% ( 3) 00:17:29.913 6.027 - 6.053: 99.8076% ( 1) 00:17:29.913 6.213 - 6.240: 99.8220% ( 3) 00:17:29.913 6.293 - 6.320: 99.8268% ( 1) 00:17:29.913 6.347 - 6.373: 99.8364% ( 2) 00:17:29.913 6.373 - 6.400: 99.8412% ( 1) 00:17:29.913 6.427 - 6.453: 99.8461% ( 1) 00:17:29.913 6.453 - 6.480: 99.8509% ( 1) 00:17:29.913 6.480 - 6.507: 99.8557% ( 1) 00:17:29.913 6.507 - 6.533: 99.8653% ( 2) 00:17:29.913 6.640 - 6.667: 99.8701% ( 1) 00:17:29.913 6.773 - 6.800: 99.8749% ( 1) 00:17:29.913 6.827 - 6.880: 99.8797% ( 1) 00:17:29.913 6.880 - 6.933: 99.8845% ( 1) 00:17:29.913 6.933 - 6.987: 99.8942% ( 2) 00:17:29.913 6.987 - 7.040: 99.8990% ( 1) 00:17:29.913 7.040 - 7.093: 99.9038% ( 1) 00:17:29.913 7.093 - 7.147: 99.9134% ( 2) 00:17:29.913 7.413 - 7.467: 99.9230% ( 2) 00:17:29.913 8.267 - 8.320: 99.9278% ( 1) 00:17:29.913 [2024-10-07 09:38:29.162154] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:29.913 8.320 - 8.373: 99.9327% ( 1) 00:17:29.913 3986.773 - 4014.080: 100.0000% ( 14) 00:17:29.913 00:17:29.913 Complete histogram 00:17:29.913 ================== 00:17:29.913 Range in us Cumulative Count 00:17:29.913 1.633 - 1.640: 0.0144% ( 3) 00:17:29.913 1.640 - 1.647: 0.8659% ( 177) 00:17:29.913 1.647 - 1.653: 1.3855% ( 108) 00:17:29.913 1.653 - 1.660: 1.4913% ( 22) 00:17:29.913 1.660 - 1.667: 1.6404% ( 31) 00:17:29.913 1.667 - 1.673: 1.7030% ( 13) 00:17:29.913 1.673 - 1.680: 1.7463% ( 9) 00:17:29.913 1.680 - 1.687: 1.7607% ( 3) 00:17:29.913 1.687 - 1.693: 21.5904% ( 4122) 00:17:29.913 1.693 - 1.700: 46.6830% ( 5216) 00:17:29.913 1.700 - 1.707: 54.2599% ( 1575) 00:17:29.913 1.707 - 1.720: 74.3926% ( 4185) 00:17:29.913 1.720 - 1.733: 82.6382% ( 1714) 00:17:29.913 1.733 - 1.747: 84.0044% ( 284) 00:17:29.913 1.747 - 1.760: 86.8139% ( 584) 00:17:29.913 1.760 - 1.773: 92.0142% ( 1081) 00:17:29.913 1.773 - 1.787: 96.4593% ( 924) 00:17:29.913 1.787 - 1.800: 98.6290% ( 451) 00:17:29.913 1.800 - 1.813: 99.2784% ( 135) 00:17:29.913 1.813 - 1.827: 99.4564% ( 37) 00:17:29.913 1.827 - 1.840: 99.4756% ( 4) 00:17:29.913 1.840 - 1.853: 99.4804% ( 1) 00:17:29.913 1.920 - 1.933: 99.4853% ( 1) 00:17:29.913 1.987 - 2.000: 99.4901% ( 1) 00:17:29.913 2.013 - 2.027: 99.4949% ( 1) 00:17:29.913 2.173 - 2.187: 99.4997% ( 1) 00:17:29.913 3.360 - 3.373: 99.5045% ( 1) 00:17:29.913 3.493 - 3.520: 99.5093% ( 1) 00:17:29.913 3.787 - 3.813: 99.5141% ( 1) 00:17:29.913 3.920 - 3.947: 99.5189% ( 1) 00:17:29.913 4.107 - 4.133: 99.5237% ( 1) 00:17:29.913 4.267 - 4.293: 99.5334% ( 2) 00:17:29.913 4.373 - 4.400: 99.5382% ( 1) 00:17:29.913 4.400 - 4.427: 99.5430% ( 1) 00:17:29.913 4.507 - 4.533: 99.5478% ( 1) 00:17:29.913 4.533 - 4.560: 99.5574% ( 2) 00:17:29.913 4.587 - 4.613: 99.5622% ( 1) 00:17:29.913 4.640 - 4.667: 99.5718% ( 2) 00:17:29.913 4.667 - 4.693: 99.5767% ( 1) 00:17:29.913 4.693 - 4.720: 99.5815% ( 1) 00:17:29.913 4.747 - 4.773: 99.5863% ( 1) 00:17:29.913 4.960 - 4.987: 99.5911% ( 1) 00:17:29.913 5.067 - 5.093: 99.5959% ( 1) 00:17:29.913 5.093 - 5.120: 99.6007% ( 1) 00:17:29.913 5.147 - 5.173: 99.6055% ( 1) 00:17:29.913 5.200 - 5.227: 99.6103% ( 1) 00:17:29.913 5.227 - 5.253: 99.6151% ( 1) 00:17:29.913 5.253 - 5.280: 99.6200% ( 1) 00:17:29.913 5.307 - 5.333: 99.6248% ( 1) 00:17:29.913 5.493 - 5.520: 99.6296% ( 1) 00:17:29.913 5.573 - 5.600: 99.6344% ( 1) 00:17:29.913 5.600 - 5.627: 99.6440% ( 2) 00:17:29.913 5.680 - 5.707: 99.6488% ( 1) 00:17:29.913 5.867 - 5.893: 99.6536% ( 1) 
00:17:29.913 6.107 - 6.133: 99.6584% ( 1) 00:17:29.913 6.213 - 6.240: 99.6633% ( 1) 00:17:29.913 6.453 - 6.480: 99.6681% ( 1) 00:17:29.913 6.560 - 6.587: 99.6729% ( 1) 00:17:29.913 8.213 - 8.267: 99.6777% ( 1) 00:17:29.913 9.867 - 9.920: 99.6825% ( 1) 00:17:29.913 33.707 - 33.920: 99.6873% ( 1) 00:17:29.913 1024.000 - 1030.827: 99.6921% ( 1) 00:17:29.913 3986.773 - 4014.080: 99.9952% ( 63) 00:17:29.913 6744.747 - 6772.053: 100.0000% ( 1) 00:17:29.913 00:17:29.913 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:29.913 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:29.913 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:29.913 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:29.913 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:29.913 [ 00:17:29.913 { 00:17:29.913 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:29.913 "subtype": "Discovery", 00:17:29.913 "listen_addresses": [], 00:17:29.913 "allow_any_host": true, 00:17:29.913 "hosts": [] 00:17:29.913 }, 00:17:29.913 { 00:17:29.913 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:29.913 "subtype": "NVMe", 00:17:29.913 "listen_addresses": [ 00:17:29.913 { 00:17:29.913 "trtype": "VFIOUSER", 00:17:29.913 "adrfam": "IPv4", 00:17:29.913 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:29.913 "trsvcid": "0" 00:17:29.913 } 00:17:29.913 ], 00:17:29.913 "allow_any_host": true, 00:17:29.913 "hosts": [], 00:17:29.913 "serial_number": "SPDK1", 00:17:29.913 "model_number": "SPDK bdev Controller", 00:17:29.913 "max_namespaces": 32, 00:17:29.913 "min_cntlid": 1, 00:17:29.913 "max_cntlid": 65519, 00:17:29.913 "namespaces": [ 00:17:29.913 { 00:17:29.913 "nsid": 1, 00:17:29.913 "bdev_name": "Malloc1", 00:17:29.913 "name": "Malloc1", 00:17:29.913 "nguid": "C249F6752468476490C04B146C5544AD", 00:17:29.913 "uuid": "c249f675-2468-4764-90c0-4b146c5544ad" 00:17:29.913 }, 00:17:29.913 { 00:17:29.913 "nsid": 2, 00:17:29.913 "bdev_name": "Malloc3", 00:17:29.913 "name": "Malloc3", 00:17:29.913 "nguid": "FC6D544F76E340ED83AEC71016EA3454", 00:17:29.913 "uuid": "fc6d544f-76e3-40ed-83ae-c71016ea3454" 00:17:29.913 } 00:17:29.913 ] 00:17:29.913 }, 00:17:29.913 { 00:17:29.913 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:29.913 "subtype": "NVMe", 00:17:29.913 "listen_addresses": [ 00:17:29.913 { 00:17:29.913 "trtype": "VFIOUSER", 00:17:29.913 "adrfam": "IPv4", 00:17:29.913 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:29.913 "trsvcid": "0" 00:17:29.913 } 00:17:29.913 ], 00:17:29.913 "allow_any_host": true, 00:17:29.913 "hosts": [], 00:17:29.913 "serial_number": "SPDK2", 00:17:29.913 "model_number": "SPDK bdev Controller", 00:17:29.913 "max_namespaces": 32, 00:17:29.913 "min_cntlid": 1, 00:17:29.913 "max_cntlid": 65519, 00:17:29.913 "namespaces": [ 00:17:29.913 { 00:17:29.913 "nsid": 1, 00:17:29.913 "bdev_name": "Malloc2", 00:17:29.913 "name": "Malloc2", 00:17:29.913 "nguid": "FF9EE0CF28124A47B3FBD0A99B2E5747", 00:17:29.913 "uuid": "ff9ee0cf-2812-4a47-b3fb-d0a99b2e5747" 00:17:29.913 } 00:17:29.913 ] 00:17:29.913 } 00:17:29.913 ] 00:17:29.913 09:38:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:29.913 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:29.914 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3322688 00:17:29.914 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:29.914 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- sync/functions.sh@10 -- # local i=0 00:17:29.914 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- sync/functions.sh@11 -- # [[ ! -e /tmp/aer_touch_file ]] 00:17:29.914 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- sync/functions.sh@15 -- # [[ ! -e /tmp/aer_touch_file ]] 00:17:29.914 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- sync/functions.sh@19 -- # return 0 00:17:29.914 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:29.914 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:29.914 [2024-10-07 09:38:29.528004] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:29.914 Malloc4 00:17:30.176 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:30.176 [2024-10-07 09:38:29.732367] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:30.176 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:30.176 Asynchronous Event Request test 00:17:30.176 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:30.176 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:30.176 Registering asynchronous event callbacks... 00:17:30.176 Starting namespace attribute notice tests for all controllers... 00:17:30.176 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:30.176 aer_cb - Changed Namespace 00:17:30.176 Cleaning up... 
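[Editor's note: a reference sketch, not part of the captured test output; scripts/rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used throughout this log. The Malloc4 namespace that appears as nsid 2 in the listing below was hot-added by the bdev_malloc_create and nvmf_subsystem_add_ns calls just above, which is what fired the namespace-attribute AER ("aer_cb - Changed Namespace"). The cnode2 layout shown in these listings reduces to the following RPC sequence; every command appears verbatim elsewhere in this log, except that the first pass is assumed to create the transport without the interrupt-mode flags used later:

  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
  scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2]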
00:17:30.437 [ 00:17:30.437 { 00:17:30.437 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:30.437 "subtype": "Discovery", 00:17:30.437 "listen_addresses": [], 00:17:30.437 "allow_any_host": true, 00:17:30.437 "hosts": [] 00:17:30.437 }, 00:17:30.437 { 00:17:30.437 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:30.437 "subtype": "NVMe", 00:17:30.437 "listen_addresses": [ 00:17:30.437 { 00:17:30.437 "trtype": "VFIOUSER", 00:17:30.437 "adrfam": "IPv4", 00:17:30.437 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:30.437 "trsvcid": "0" 00:17:30.437 } 00:17:30.437 ], 00:17:30.437 "allow_any_host": true, 00:17:30.437 "hosts": [], 00:17:30.437 "serial_number": "SPDK1", 00:17:30.437 "model_number": "SPDK bdev Controller", 00:17:30.437 "max_namespaces": 32, 00:17:30.437 "min_cntlid": 1, 00:17:30.437 "max_cntlid": 65519, 00:17:30.437 "namespaces": [ 00:17:30.437 { 00:17:30.437 "nsid": 1, 00:17:30.437 "bdev_name": "Malloc1", 00:17:30.437 "name": "Malloc1", 00:17:30.437 "nguid": "C249F6752468476490C04B146C5544AD", 00:17:30.437 "uuid": "c249f675-2468-4764-90c0-4b146c5544ad" 00:17:30.437 }, 00:17:30.437 { 00:17:30.437 "nsid": 2, 00:17:30.437 "bdev_name": "Malloc3", 00:17:30.437 "name": "Malloc3", 00:17:30.437 "nguid": "FC6D544F76E340ED83AEC71016EA3454", 00:17:30.437 "uuid": "fc6d544f-76e3-40ed-83ae-c71016ea3454" 00:17:30.437 } 00:17:30.437 ] 00:17:30.437 }, 00:17:30.437 { 00:17:30.437 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:30.437 "subtype": "NVMe", 00:17:30.437 "listen_addresses": [ 00:17:30.437 { 00:17:30.437 "trtype": "VFIOUSER", 00:17:30.437 "adrfam": "IPv4", 00:17:30.437 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:30.437 "trsvcid": "0" 00:17:30.437 } 00:17:30.437 ], 00:17:30.437 "allow_any_host": true, 00:17:30.437 "hosts": [], 00:17:30.437 "serial_number": "SPDK2", 00:17:30.437 "model_number": "SPDK bdev Controller", 00:17:30.437 "max_namespaces": 32, 00:17:30.437 "min_cntlid": 1, 00:17:30.437 "max_cntlid": 65519, 00:17:30.437 "namespaces": [ 00:17:30.437 { 00:17:30.437 "nsid": 1, 00:17:30.437 "bdev_name": "Malloc2", 00:17:30.437 "name": "Malloc2", 00:17:30.437 "nguid": "FF9EE0CF28124A47B3FBD0A99B2E5747", 00:17:30.437 "uuid": "ff9ee0cf-2812-4a47-b3fb-d0a99b2e5747" 00:17:30.437 }, 00:17:30.437 { 00:17:30.437 "nsid": 2, 00:17:30.437 "bdev_name": "Malloc4", 00:17:30.437 "name": "Malloc4", 00:17:30.437 "nguid": "70E459209D7B4CB5B7C71B6E5BBCC070", 00:17:30.437 "uuid": "70e45920-9d7b-4cb5-b7c7-1b6e5bbcc070" 00:17:30.437 } 00:17:30.437 ] 00:17:30.437 } 00:17:30.437 ] 00:17:30.438 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3322688 00:17:30.438 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:30.438 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3313767 00:17:30.438 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' -z 3313767 ']' 00:17:30.438 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # kill -0 3313767 00:17:30.438 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # uname 00:17:30.438 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:17:30.438 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3313767 00:17:30.438 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@959 -- # process_name=reactor_0 00:17:30.438 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:17:30.438 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3313767' 00:17:30.438 killing process with pid 3313767 00:17:30.438 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # kill 3313767 00:17:30.438 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@977 -- # wait 3313767 00:17:30.699 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:30.699 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:30.699 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:30.699 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:30.699 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:30.699 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3323019 00:17:30.699 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3323019' 00:17:30.699 Process pid: 3323019 00:17:30.699 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:30.699 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:30.699 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3323019 00:17:30.699 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # '[' -z 3323019 ']' 00:17:30.699 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.699 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local max_retries=100 00:17:30.699 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.699 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@843 -- # xtrace_disable 00:17:30.699 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:30.699 [2024-10-07 09:38:30.231420] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:30.699 [2024-10-07 09:38:30.232357] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:17:30.699 [2024-10-07 09:38:30.232399] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.699 [2024-10-07 09:38:30.309003] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:30.961 [2024-10-07 09:38:30.365129] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.961 [2024-10-07 09:38:30.365164] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.961 [2024-10-07 09:38:30.365170] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.961 [2024-10-07 09:38:30.365175] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.961 [2024-10-07 09:38:30.365179] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.961 [2024-10-07 09:38:30.366445] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.961 [2024-10-07 09:38:30.366596] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.961 [2024-10-07 09:38:30.366748] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:17:30.961 [2024-10-07 09:38:30.366852] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.961 [2024-10-07 09:38:30.432069] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:30.961 [2024-10-07 09:38:30.432997] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:30.961 [2024-10-07 09:38:30.433857] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:30.961 [2024-10-07 09:38:30.434520] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:30.961 [2024-10-07 09:38:30.434539] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
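[Editor's note: a short sketch, not part of the captured test output. This second pass re-runs the vfio-user setup in interrupt mode; the operative difference, visible in the commands above and below, is the --interrupt-mode flag on the target and the -M -I flags on the transport:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I

The thread.c notices above confirm the app thread and each nvmf poll group were placed in interrupt mode before the subsystems are created below.]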
00:17:31.533 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:17:31.533 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@867 -- # return 0 00:17:31.533 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:32.476 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:32.736 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:32.736 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:32.736 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:32.736 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:32.736 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:32.997 Malloc1 00:17:32.997 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:32.997 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:33.258 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:33.520 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:33.520 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:33.520 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:33.520 Malloc2 00:17:33.781 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:33.781 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:34.042 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:34.304 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:34.304 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3323019 00:17:34.304 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@953 -- # '[' -z 3323019 ']' 00:17:34.304 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # kill -0 3323019 00:17:34.304 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # uname 00:17:34.304 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:17:34.304 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3323019 00:17:34.304 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:17:34.304 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:17:34.304 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3323019' 00:17:34.304 killing process with pid 3323019 00:17:34.304 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # kill 3323019 00:17:34.304 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@977 -- # wait 3323019 00:17:34.304 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:34.304 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:34.304 00:17:34.304 real 0m50.865s 00:17:34.304 user 3m14.684s 00:17:34.304 sys 0m2.769s 00:17:34.304 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # xtrace_disable 00:17:34.304 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:34.304 ************************************ 00:17:34.304 END TEST nvmf_vfio_user 00:17:34.304 ************************************ 00:17:34.566 09:38:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:34.566 09:38:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:17:34.566 09:38:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:17:34.566 09:38:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:34.566 ************************************ 00:17:34.566 START TEST nvmf_vfio_user_nvme_compliance 00:17:34.566 ************************************ 00:17:34.566 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:34.566 * Looking for test storage... 
00:17:34.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:34.566 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:17:34.566 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1626 -- # lcov --version 00:17:34.566 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.828 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:17:34.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.829 --rc genhtml_branch_coverage=1 00:17:34.829 --rc genhtml_function_coverage=1 00:17:34.829 --rc genhtml_legend=1 00:17:34.829 --rc geninfo_all_blocks=1 00:17:34.829 --rc geninfo_unexecuted_blocks=1 00:17:34.829 00:17:34.829 ' 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:17:34.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.829 --rc genhtml_branch_coverage=1 00:17:34.829 --rc genhtml_function_coverage=1 00:17:34.829 --rc genhtml_legend=1 00:17:34.829 --rc geninfo_all_blocks=1 00:17:34.829 --rc geninfo_unexecuted_blocks=1 00:17:34.829 00:17:34.829 ' 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:17:34.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.829 --rc genhtml_branch_coverage=1 00:17:34.829 --rc genhtml_function_coverage=1 00:17:34.829 --rc genhtml_legend=1 00:17:34.829 --rc geninfo_all_blocks=1 00:17:34.829 --rc geninfo_unexecuted_blocks=1 00:17:34.829 00:17:34.829 ' 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:17:34.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.829 --rc genhtml_branch_coverage=1 00:17:34.829 --rc genhtml_function_coverage=1 00:17:34.829 --rc genhtml_legend=1 00:17:34.829 --rc geninfo_all_blocks=1 00:17:34.829 --rc 
geninfo_unexecuted_blocks=1 00:17:34.829 00:17:34.829 ' 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:34.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3323786 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3323786' 00:17:34.829 Process pid: 3323786 00:17:34.829 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:34.830 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:34.830 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3323786 00:17:34.830 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # '[' -z 3323786 ']' 00:17:34.830 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.830 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local max_retries=100 00:17:34.830 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.830 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@843 -- # xtrace_disable 00:17:34.830 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:34.830 [2024-10-07 09:38:34.365342] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:17:34.830 [2024-10-07 09:38:34.365416] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.830 [2024-10-07 09:38:34.446767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:35.089 [2024-10-07 09:38:34.507897] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.089 [2024-10-07 09:38:34.507934] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.089 [2024-10-07 09:38:34.507939] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.089 [2024-10-07 09:38:34.507944] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.089 [2024-10-07 09:38:34.507948] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
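The compliance target was launched just above with -i 0 -e 0xFFFF -m 0x7 (shared-memory id 0, all tracepoint groups enabled, cores 0-2), and the waitforlisten call that follows blocks until the target's RPC socket answers. A simplified stand-in for that helper — a sketch, not the real common/autotest_common.sh implementation, assuming the default /var/tmp/spdk.sock socket — looks like this:

# Poll the RPC socket until the target answers, bailing out if the target dies first.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    while ! scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$pid" 2>/dev/null || return 1   # target exited during startup
        sleep 0.5
    done
}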
00:17:35.089 [2024-10-07 09:38:34.508812] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.089 [2024-10-07 09:38:34.509019] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.089 [2024-10-07 09:38:34.509020] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.661 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:17:35.661 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@867 -- # return 0 00:17:35.661 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:36.603 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:36.603 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:36.603 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:36.603 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@564 -- # xtrace_disable 00:17:36.603 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:36.603 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:17:36.603 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:36.603 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:36.603 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@564 -- # xtrace_disable 00:17:36.604 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:36.604 malloc0 00:17:36.604 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:17:36.604 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:36.604 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@564 -- # xtrace_disable 00:17:36.604 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:36.604 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:17:36.604 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:36.604 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@564 -- # xtrace_disable 00:17:36.604 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:36.604 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:17:36.604 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:36.604 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@564 -- # xtrace_disable 00:17:36.604 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:36.604 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:17:36.604 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:36.864 00:17:36.864 00:17:36.864 CUnit - A unit testing framework for C - Version 2.1-3 00:17:36.864 http://cunit.sourceforge.net/ 00:17:36.864 00:17:36.864 00:17:36.864 Suite: nvme_compliance 00:17:36.864 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-07 09:38:36.378016] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:36.864 [2024-10-07 09:38:36.379283] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:36.864 [2024-10-07 09:38:36.379295] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:36.864 [2024-10-07 09:38:36.379300] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:36.864 [2024-10-07 09:38:36.381036] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:36.864 passed 00:17:36.864 Test: admin_identify_ctrlr_verify_fused ...[2024-10-07 09:38:36.460526] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:36.864 [2024-10-07 09:38:36.463541] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:36.864 passed 00:17:37.125 Test: admin_identify_ns ...[2024-10-07 09:38:36.535984] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:37.125 [2024-10-07 09:38:36.595624] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:37.125 [2024-10-07 09:38:36.603631] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:37.125 [2024-10-07 09:38:36.624710] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:37.125 passed 00:17:37.125 Test: admin_get_features_mandatory_features ...[2024-10-07 09:38:36.700758] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:37.125 [2024-10-07 09:38:36.703777] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:37.125 passed 00:17:37.125 Test: admin_get_features_optional_features ...[2024-10-07 09:38:36.781233] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:37.125 [2024-10-07 09:38:36.784254] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:37.385 passed 00:17:37.385 Test: admin_set_features_number_of_queues ...[2024-10-07 09:38:36.859016] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:37.385 [2024-10-07 09:38:36.964709] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:37.385 passed 00:17:37.385 Test: admin_get_log_page_mandatory_logs ...[2024-10-07 09:38:37.037944] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:37.385 [2024-10-07 09:38:37.040966] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:37.644 passed 00:17:37.644 Test: admin_get_log_page_with_lpo ...[2024-10-07 09:38:37.115714] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:37.644 [2024-10-07 09:38:37.185626] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:37.644 [2024-10-07 09:38:37.198670] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:37.644 passed 00:17:37.645 Test: fabric_property_get ...[2024-10-07 09:38:37.272870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:37.645 [2024-10-07 09:38:37.274075] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:37.645 [2024-10-07 09:38:37.275891] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:37.645 passed 00:17:37.904 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-07 09:38:37.350332] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:37.904 [2024-10-07 09:38:37.351533] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:37.904 [2024-10-07 09:38:37.353355] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:37.904 passed 00:17:37.904 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-07 09:38:37.430101] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:37.904 [2024-10-07 09:38:37.514625] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:37.904 [2024-10-07 09:38:37.530622] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:37.904 [2024-10-07 09:38:37.535702] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:37.904 passed 00:17:38.164 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-07 09:38:37.609942] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:38.164 [2024-10-07 09:38:37.611145] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:38.164 [2024-10-07 09:38:37.612963] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:38.164 passed 00:17:38.164 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-07 09:38:37.686689] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:38.164 [2024-10-07 09:38:37.764621] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:38.164 [2024-10-07 09:38:37.788621] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:38.164 [2024-10-07 09:38:37.793685] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:38.164 passed 00:17:38.424 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-07 09:38:37.866848] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:38.424 [2024-10-07 09:38:37.868039] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:38.424 [2024-10-07 09:38:37.868059] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not 
supported 00:17:38.424 [2024-10-07 09:38:37.869864] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:38.424 passed 00:17:38.424 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-07 09:38:37.945605] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:38.424 [2024-10-07 09:38:38.039626] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:38.424 [2024-10-07 09:38:38.047623] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:38.424 [2024-10-07 09:38:38.055624] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:38.424 [2024-10-07 09:38:38.063629] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:38.684 [2024-10-07 09:38:38.092687] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:38.684 passed 00:17:38.684 Test: admin_create_io_sq_verify_pc ...[2024-10-07 09:38:38.165875] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:38.684 [2024-10-07 09:38:38.182627] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:38.684 [2024-10-07 09:38:38.199026] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:38.684 passed 00:17:38.684 Test: admin_create_io_qp_max_qps ...[2024-10-07 09:38:38.277482] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:40.065 [2024-10-07 09:38:39.384624] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:40.326 [2024-10-07 09:38:39.783971] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:40.326 passed 00:17:40.326 Test: admin_create_io_sq_shared_cq ...[2024-10-07 09:38:39.857988] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:40.588 [2024-10-07 09:38:39.993625] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:40.588 [2024-10-07 09:38:40.030678] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:40.588 passed 00:17:40.588 00:17:40.588 Run Summary: Type Total Ran Passed Failed Inactive 00:17:40.588 suites 1 1 n/a 0 0 00:17:40.588 tests 18 18 18 0 0 00:17:40.588 asserts 360 360 360 0 n/a 00:17:40.588 00:17:40.588 Elapsed time = 1.504 seconds 00:17:40.588 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3323786 00:17:40.588 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' -z 3323786 ']' 00:17:40.588 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # kill -0 3323786 00:17:40.588 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # uname 00:17:40.588 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:17:40.588 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3323786 00:17:40.588 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:17:40.588 09:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:17:40.588 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3323786' 00:17:40.588 killing process with pid 3323786 00:17:40.588 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # kill 3323786 00:17:40.588 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@977 -- # wait 3323786 00:17:40.850 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:40.850 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:40.850 00:17:40.850 real 0m6.239s 00:17:40.850 user 0m17.486s 00:17:40.850 sys 0m0.574s 00:17:40.850 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # xtrace_disable 00:17:40.850 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:40.850 ************************************ 00:17:40.850 END TEST nvmf_vfio_user_nvme_compliance 00:17:40.850 ************************************ 00:17:40.850 09:38:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:40.850 09:38:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:17:40.850 09:38:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:17:40.850 09:38:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:40.850 ************************************ 00:17:40.850 START TEST nvmf_vfio_user_fuzz 00:17:40.850 ************************************ 00:17:40.850 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:40.850 * Looking for test storage... 
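Both sections so far tear the target down through the same killprocess helper, and the trace exposes its logic check by check: the '[' -z pid ']' guard, the kill -0 liveness probe, the uname and ps comm= lookups, and the final kill/wait at autotest_common.sh lines 953-977. Reconstructed as a sketch (not a verbatim copy of the helper):

killprocess() {
    local pid=$1 process_name=
    [ -z "$pid" ] && return 1                      # no pid given
    kill -0 "$pid" 2>/dev/null || return 0         # already gone, nothing to do
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = sudo ]; then
        kill -9 "$pid"                             # a sudo wrapper will not forward SIGTERM
    else
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true                # reap it so the test can assert a clean exit
}

In the traces above, process_name resolves to reactor_0, so the plain-kill branch is taken each time.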
00:17:40.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:40.850 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:17:40.850 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1626 -- # lcov --version 00:17:40.850 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:17:41.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.112 --rc genhtml_branch_coverage=1 00:17:41.112 --rc genhtml_function_coverage=1 00:17:41.112 --rc genhtml_legend=1 00:17:41.112 --rc geninfo_all_blocks=1 00:17:41.112 --rc geninfo_unexecuted_blocks=1 00:17:41.112 00:17:41.112 ' 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:17:41.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.112 --rc genhtml_branch_coverage=1 00:17:41.112 --rc genhtml_function_coverage=1 00:17:41.112 --rc genhtml_legend=1 00:17:41.112 --rc geninfo_all_blocks=1 00:17:41.112 --rc geninfo_unexecuted_blocks=1 00:17:41.112 00:17:41.112 ' 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:17:41.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.112 --rc genhtml_branch_coverage=1 00:17:41.112 --rc genhtml_function_coverage=1 00:17:41.112 --rc genhtml_legend=1 00:17:41.112 --rc geninfo_all_blocks=1 00:17:41.112 --rc geninfo_unexecuted_blocks=1 00:17:41.112 00:17:41.112 ' 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:17:41.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.112 --rc genhtml_branch_coverage=1 00:17:41.112 --rc genhtml_function_coverage=1 00:17:41.112 --rc genhtml_legend=1 00:17:41.112 --rc geninfo_all_blocks=1 00:17:41.112 --rc geninfo_unexecuted_blocks=1 00:17:41.112 00:17:41.112 ' 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.112 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:41.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3325194 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3325194' 00:17:41.113 Process pid: 3325194 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:41.113 09:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3325194 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # '[' -z 3325194 ']' 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local max_retries=100 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@843 -- # xtrace_disable 00:17:41.113 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:42.054 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:17:42.054 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@867 -- # return 0 00:17:42.054 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@564 -- # xtrace_disable 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@564 -- # xtrace_disable 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:42.995 malloc0 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@564 -- # xtrace_disable 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@564 -- # xtrace_disable 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:42.995 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:17:42.996 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:42.996 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:15.103 Fuzzing completed. Shutting down the fuzz application 00:18:15.103 00:18:15.103 Dumping successful admin opcodes: 00:18:15.103 8, 9, 10, 24, 00:18:15.103 Dumping successful io opcodes: 00:18:15.103 0, 00:18:15.103 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1290173, total successful commands: 5063, random_seed: 1142341504 00:18:15.103 NS: 0x200003a1ef00 admin qp, Total commands completed: 287955, total successful commands: 2324, random_seed: 4117422784 00:18:15.103 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:15.103 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:15.103 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:15.103 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:15.103 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3325194 00:18:15.103 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' -z 3325194 ']' 00:18:15.103 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # kill -0 3325194 00:18:15.103 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # uname 00:18:15.103 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:18:15.103 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3325194 00:18:15.103 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:18:15.103 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:18:15.103 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3325194' 00:18:15.103 killing process with pid 3325194 00:18:15.103 09:39:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # kill 3325194 00:18:15.103 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@977 -- # wait 3325194 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:15.103 00:18:15.103 real 0m32.890s 00:18:15.103 user 0m38.320s 00:18:15.103 sys 0m23.550s 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # xtrace_disable 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:15.103 ************************************ 00:18:15.103 END TEST nvmf_vfio_user_fuzz 00:18:15.103 ************************************ 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:15.103 ************************************ 00:18:15.103 START TEST nvmf_auth_target 00:18:15.103 ************************************ 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:15.103 * Looking for test storage... 
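Stepping back: the vfio-user fuzz pass that just finished above is a small amount of RPC setup around one long-running fuzzer invocation. A minimal sketch reconstructed from the trace, where $SPDK is shorthand for the workspace's spdk checkout (not a variable the scripts themselves use) and nvmf_tgt is assumed to be up on the default RPC socket:

# Stand up a vfio-user target backed by a 64 MiB malloc bdev
$SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0

# Fuzz admin and I/O queues for 30 s; the fixed -S seed makes a failing run reproducible
$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

Per the stats above, that pushed ~1.29M I/O commands and ~288K admin commands through the malloc namespace; the short list of "successful" opcodes is expected, since randomly generated commands rarely parse as valid.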
00:18:15.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1626 -- # lcov --version 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:15.103 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:18:15.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.104 --rc genhtml_branch_coverage=1 00:18:15.104 --rc genhtml_function_coverage=1 00:18:15.104 --rc genhtml_legend=1 00:18:15.104 --rc geninfo_all_blocks=1 00:18:15.104 --rc geninfo_unexecuted_blocks=1 00:18:15.104 00:18:15.104 ' 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:18:15.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.104 --rc genhtml_branch_coverage=1 00:18:15.104 --rc genhtml_function_coverage=1 00:18:15.104 --rc genhtml_legend=1 00:18:15.104 --rc geninfo_all_blocks=1 00:18:15.104 --rc geninfo_unexecuted_blocks=1 00:18:15.104 00:18:15.104 ' 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:18:15.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.104 --rc genhtml_branch_coverage=1 00:18:15.104 --rc genhtml_function_coverage=1 00:18:15.104 --rc genhtml_legend=1 00:18:15.104 --rc geninfo_all_blocks=1 00:18:15.104 --rc geninfo_unexecuted_blocks=1 00:18:15.104 00:18:15.104 ' 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:18:15.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.104 --rc genhtml_branch_coverage=1 00:18:15.104 --rc genhtml_function_coverage=1 00:18:15.104 --rc genhtml_legend=1 00:18:15.104 --rc geninfo_all_blocks=1 00:18:15.104 --rc geninfo_unexecuted_blocks=1 00:18:15.104 00:18:15.104 ' 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.104 09:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain directories repeated many more times, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same list with one more copy prepended, elided] 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same list with one more copy prepended, elided] 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo [exported PATH echoed back, elided]
00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:15.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@436 -- # local -g is_hw=no 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:15.104 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:21.695 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:21.695 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:21.695 09:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:21.695 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:21.696 Found net devices under 0000:31:00.0: cvl_0_0 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:21.696 Found net devices under 0000:31:00.1: cvl_0_1 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:21.696 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:21.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:21.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:18:21.696 00:18:21.696 --- 10.0.0.2 ping statistics --- 00:18:21.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.696 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:21.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:21.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:18:21.696 00:18:21.696 --- 10.0.0.1 ping statistics --- 00:18:21.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.696 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=3335797 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 3335797 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # '[' -z 3335797 ']' 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local max_retries=100 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
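The interface plumbing above is worth spelling out: nvmftestinit moves the target's e810 port (cvl_0_0) into a private network namespace and leaves the peer port (cvl_0_1) in the root namespace as the initiator, so target and host traffic crosses a real link even on a single machine. Condensed from the trace as a sketch, with $SPDK as before:

# Target NIC moves into a private namespace; initiator NIC stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open the NVMe/TCP port

# Every target-side command, including nvmf_tgt itself, then runs inside the namespace
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth

The two pings (0.616 ms and 0.321 ms round trips) are the sanity check that both directions work before the target comes up.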
00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@843 -- # xtrace_disable 00:18:21.696 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@867 -- # return 0 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@733 -- # xtrace_disable 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3336023 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=402bed890b8c1a0a36d1522b98ae9cc4ae43a45dcccaded5 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.We2 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 402bed890b8c1a0a36d1522b98ae9cc4ae43a45dcccaded5 0 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 402bed890b8c1a0a36d1522b98ae9cc4ae43a45dcccaded5 0 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=402bed890b8c1a0a36d1522b98ae9cc4ae43a45dcccaded5 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:18:22.641 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
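The `python -` step above is where the raw hex from /dev/urandom becomes a transportable secret. A minimal sketch of what it computes, assuming the standard DH-HMAC-CHAP secret representation (base64 of the secret bytes followed by their little-endian CRC-32, wrapped as DHHC-1:<hash-id>:<base64>:, with hash ids 00/01/02/03 matching the digest=0..3 values in the trace) and assuming the ASCII key string is used as the secret verbatim, which is what the len=48 in the trace suggests:

# 48 hex characters of key material, exactly as gen_dhchap_key null 48 draws it
key=$(xxd -p -c0 -l 24 /dev/urandom)

python3 - "$key" 0 <<'EOF'
import base64, sys, zlib

key = sys.argv[1].encode()                     # the ASCII key string is the secret
digest = int(sys.argv[2])                      # 0 = unspecified, 1/2/3 = SHA-256/384/512
crc = zlib.crc32(key).to_bytes(4, "little")    # integrity tail required by the format
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF

The resulting one-line secret is what lands in files like /tmp/spdk.key-null.We2, which the next trace entries chmod to 0600 before use.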
00:18:22.904 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.We2 00:18:22.904 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.We2 00:18:22.904 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.We2 00:18:22.904 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:22.904 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:22.904 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:22.904 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:22.904 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:18:22.904 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:18:22.904 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:22.904 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=46594c644bf81ac308fd1db68f7e803e7d03a7e21ee99558e2452e56939b2fe8 00:18:22.904 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:18:22.904 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.OOd 00:18:22.904 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 46594c644bf81ac308fd1db68f7e803e7d03a7e21ee99558e2452e56939b2fe8 3 00:18:22.904 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 46594c644bf81ac308fd1db68f7e803e7d03a7e21ee99558e2452e56939b2fe8 3 00:18:22.904 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=46594c644bf81ac308fd1db68f7e803e7d03a7e21ee99558e2452e56939b2fe8 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.OOd 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.OOd 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.OOd 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=453ea0ccd49ed79f3aacf3aa2ce41069 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.aTY 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 453ea0ccd49ed79f3aacf3aa2ce41069 1 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 453ea0ccd49ed79f3aacf3aa2ce41069 1 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=453ea0ccd49ed79f3aacf3aa2ce41069 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.aTY 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.aTY 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.aTY 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=befdb67dc4c71a9d3ada0bec2f2c2957cbae6da28480044b 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.npl 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key befdb67dc4c71a9d3ada0bec2f2c2957cbae6da28480044b 2 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 befdb67dc4c71a9d3ada0bec2f2c2957cbae6da28480044b 2 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:22.905 09:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=befdb67dc4c71a9d3ada0bec2f2c2957cbae6da28480044b 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.npl 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.npl 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.npl 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=427afcd9b3d01cec598de96622dd0d4937417416622a0466 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.x2b 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 427afcd9b3d01cec598de96622dd0d4937417416622a0466 2 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 427afcd9b3d01cec598de96622dd0d4937417416622a0466 2 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=427afcd9b3d01cec598de96622dd0d4937417416622a0466 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:18:22.905 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.x2b 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.x2b 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.x2b 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=3ed958c24cd21e9911eebfc74f892129 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.ipF 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 3ed958c24cd21e9911eebfc74f892129 1 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 3ed958c24cd21e9911eebfc74f892129 1 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=3ed958c24cd21e9911eebfc74f892129 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.ipF 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.ipF 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ipF 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=8e636fd7bd7e916afa7cead757818198912aaceb19a0d54d0e9963d6b7c1e9f7 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.Qzq 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key 8e636fd7bd7e916afa7cead757818198912aaceb19a0d54d0e9963d6b7c1e9f7 3 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 8e636fd7bd7e916afa7cead757818198912aaceb19a0d54d0e9963d6b7c1e9f7 3 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=8e636fd7bd7e916afa7cead757818198912aaceb19a0d54d0e9963d6b7c1e9f7 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.Qzq 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.Qzq 00:18:23.167 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Qzq 00:18:23.168 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:23.168 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3335797 00:18:23.168 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # '[' -z 3335797 ']' 00:18:23.168 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.168 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local max_retries=100 00:18:23.168 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.168 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@843 -- # xtrace_disable 00:18:23.168 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.429 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:18:23.430 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@867 -- # return 0 00:18:23.430 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3336023 /var/tmp/host.sock 00:18:23.430 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # '[' -z 3336023 ']' 00:18:23.430 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/host.sock 00:18:23.430 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local max_retries=100 00:18:23.430 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:23.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
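With all four key/ckey pairs on disk, the loop that follows registers every file twice, because keyring entries are process-local: once with the target over the default RPC socket and once with the host-side spdk_tgt listening on /var/tmp/host.sock. For the first pair, the equivalent commands are (a sketch, with $SPDK as before):

# Target side (rpc_cmd) and host side (hostrpc) must both know key0/ckey0
$SPDK/scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.We2
$SPDK/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.We2
$SPDK/scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OOd
$SPDK/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OOd

keyN is the host's secret and ckeyN the controller (bidirectional) secret; note that ckeys[3] is deliberately empty above, so the key3 iterations exercise unidirectional authentication.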
00:18:23.430 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@843 -- # xtrace_disable 00:18:23.430 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.692 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:18:23.692 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@867 -- # return 0 00:18:23.692 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:23.692 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:23.692 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.692 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:23.692 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:23.692 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.We2 00:18:23.692 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:23.692 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.692 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:23.692 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.We2 00:18:23.692 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.We2 00:18:23.954 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.OOd ]] 00:18:23.954 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OOd 00:18:23.954 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:23.954 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.954 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:23.954 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OOd 00:18:23.954 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OOd 00:18:23.954 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:23.954 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.aTY 00:18:23.954 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:23.954 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.954 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:23.954 09:39:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.aTY 00:18:23.954 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.aTY 00:18:24.216 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.npl ]] 00:18:24.216 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.npl 00:18:24.216 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:24.216 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.216 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:24.216 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.npl 00:18:24.216 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.npl 00:18:24.477 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:24.477 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.x2b 00:18:24.477 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:24.477 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.477 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:24.477 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.x2b 00:18:24.477 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.x2b 00:18:24.738 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ipF ]] 00:18:24.738 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ipF 00:18:24.738 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:24.738 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.738 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:24.738 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ipF 00:18:24.738 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ipF 00:18:24.738 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:24.738 09:39:24 
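The target/auth.sh@108-@113 frames above are one loop iteration per key: each generated key file is registered twice, on the target through rpc_cmd and on the host application through hostrpc, with the optional controller ("ckey") file guarded by a non-empty check. Condensed into one place (a paraphrase of the traced loop, not new behavior):

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"       # target side
    hostrpc keyring_file_add_key "key$i" "${keys[$i]}"       # host side
    if [[ -n ${ckeys[$i]} ]]; then                           # ckeys[3] is empty and gets skipped
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        hostrpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done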
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Qzq 00:18:24.738 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:24.738 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.738 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:24.738 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Qzq 00:18:24.738 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Qzq 00:18:24.997 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:24.997 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:24.997 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.997 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.997 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:24.997 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:25.257 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:25.257 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.257 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:25.257 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:25.257 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:25.257 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.257 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.257 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:25.258 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.258 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:25.258 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.258 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.258 
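connect_authenticate, entered above for the sha256/null/key0 combination, reduces to two RPCs: allow the host NQN on the subsystem with a key pair, then attach a host-side controller with the same pair so the DH-HMAC-CHAP handshake actually runs. The two calls with this run's values (both appear verbatim in the frames above and below; rpc_cmd and hostrpc are the trace's own wrappers):

hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0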
09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.517 00:18:25.517 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.517 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.517 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.778 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.778 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.778 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:25.778 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.778 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:25.778 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.778 { 00:18:25.778 "cntlid": 1, 00:18:25.778 "qid": 0, 00:18:25.778 "state": "enabled", 00:18:25.778 "thread": "nvmf_tgt_poll_group_000", 00:18:25.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:25.778 "listen_address": { 00:18:25.778 "trtype": "TCP", 00:18:25.778 "adrfam": "IPv4", 00:18:25.778 "traddr": "10.0.0.2", 00:18:25.778 "trsvcid": "4420" 00:18:25.778 }, 00:18:25.778 "peer_address": { 00:18:25.778 "trtype": "TCP", 00:18:25.778 "adrfam": "IPv4", 00:18:25.778 "traddr": "10.0.0.1", 00:18:25.778 "trsvcid": "44592" 00:18:25.778 }, 00:18:25.778 "auth": { 00:18:25.778 "state": "completed", 00:18:25.778 "digest": "sha256", 00:18:25.778 "dhgroup": "null" 00:18:25.778 } 00:18:25.778 } 00:18:25.778 ]' 00:18:25.778 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.778 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.778 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.778 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:25.778 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.778 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.778 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.778 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.038 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
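The qpair dump above is what the assertions key off: once authentication succeeds, the qpair's auth object carries the negotiated digest, DH group and state. The three checks pulled together (same jq filters as the trace):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]      # "null" means no FFDHE group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]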
DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:18:26.038 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:18:26.608 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.608 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:26.608 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:26.608 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.608 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:26.608 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.608 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:26.608 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:26.869 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:26.869 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.869 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:26.869 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:26.869 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:26.869 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.869 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.869 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:26.869 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.869 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:26.869 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.869 09:39:26 
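nvme_connect then repeats the same handshake with the kernel initiator; the plaintext DHHC-1 strings are passed straight on the nvme-cli command line. Stripped of the wrapper (flags exactly as traced; $host_key and $ctrl_key stand in for the two long DHHC-1:... strings shown above, and -i 1 / -l 0 request a single I/O queue and a zero controller-loss timeout):

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
    -q "$hostnqn" --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0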
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.869 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.130 00:18:27.130 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.130 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.130 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.130 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.130 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.130 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:27.130 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.130 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:27.131 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.131 { 00:18:27.131 "cntlid": 3, 00:18:27.131 "qid": 0, 00:18:27.131 "state": "enabled", 00:18:27.131 "thread": "nvmf_tgt_poll_group_000", 00:18:27.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:27.131 "listen_address": { 00:18:27.131 "trtype": "TCP", 00:18:27.131 "adrfam": "IPv4", 00:18:27.131 "traddr": "10.0.0.2", 00:18:27.131 "trsvcid": "4420" 00:18:27.131 }, 00:18:27.131 "peer_address": { 00:18:27.131 "trtype": "TCP", 00:18:27.131 "adrfam": "IPv4", 00:18:27.131 "traddr": "10.0.0.1", 00:18:27.131 "trsvcid": "44610" 00:18:27.131 }, 00:18:27.131 "auth": { 00:18:27.131 "state": "completed", 00:18:27.131 "digest": "sha256", 00:18:27.131 "dhgroup": "null" 00:18:27.131 } 00:18:27.131 } 00:18:27.131 ]' 00:18:27.131 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.391 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.391 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.391 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:27.391 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.391 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.391 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.391 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.391 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:18:27.391 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:28.335 09:39:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.335 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.596 00:18:28.596 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.596 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.596 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.856 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.856 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.856 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:28.856 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.856 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:28.856 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.856 { 00:18:28.856 "cntlid": 5, 00:18:28.856 "qid": 0, 00:18:28.856 "state": "enabled", 00:18:28.856 "thread": "nvmf_tgt_poll_group_000", 00:18:28.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:28.856 "listen_address": { 00:18:28.856 "trtype": "TCP", 00:18:28.856 "adrfam": "IPv4", 00:18:28.856 "traddr": "10.0.0.2", 00:18:28.856 "trsvcid": "4420" 00:18:28.856 }, 00:18:28.856 "peer_address": { 00:18:28.856 "trtype": "TCP", 00:18:28.856 "adrfam": "IPv4", 00:18:28.856 "traddr": "10.0.0.1", 00:18:28.856 "trsvcid": "44636" 00:18:28.856 }, 00:18:28.856 "auth": { 00:18:28.856 "state": "completed", 00:18:28.856 "digest": "sha256", 00:18:28.856 "dhgroup": "null" 00:18:28.856 } 00:18:28.856 } 00:18:28.856 ]' 00:18:28.856 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.856 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.856 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.856 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:28.856 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.856 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.856 09:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.856 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.116 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:18:29.116 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:18:29.689 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.951 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:30.213 00:18:30.213 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.213 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.213 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.475 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.475 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.475 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:30.475 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.475 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:30.475 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.475 { 00:18:30.475 "cntlid": 7, 00:18:30.475 "qid": 0, 00:18:30.475 "state": "enabled", 00:18:30.475 "thread": "nvmf_tgt_poll_group_000", 00:18:30.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:30.475 "listen_address": { 00:18:30.475 "trtype": "TCP", 00:18:30.475 "adrfam": "IPv4", 00:18:30.475 "traddr": "10.0.0.2", 00:18:30.475 "trsvcid": "4420" 00:18:30.475 }, 00:18:30.475 "peer_address": { 00:18:30.475 "trtype": "TCP", 00:18:30.475 "adrfam": "IPv4", 00:18:30.475 "traddr": "10.0.0.1", 00:18:30.475 "trsvcid": "44662" 00:18:30.475 }, 00:18:30.475 "auth": { 00:18:30.475 "state": "completed", 00:18:30.475 "digest": "sha256", 00:18:30.475 "dhgroup": "null" 00:18:30.475 } 00:18:30.475 } 00:18:30.475 ]' 00:18:30.475 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.475 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:30.475 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.475 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:30.475 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.475 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
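Note the asymmetry in this key3 pass: ckeys[3] was generated empty, so the ckey expansion at target/auth.sh@68 collapses to nothing and both nvmf_subsystem_add_host and bdev_nvme_attach_controller above run with --dhchap-key key3 alone. That makes this a unidirectional DH-HMAC-CHAP session, the target authenticates the host but the host never challenges the controller (an interpretation of the trace, not an extra step):

ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})   # $3 is the key id argument; ckeys[3]="" so ckey=()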
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.475 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.475 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.738 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:18:30.738 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:18:31.311 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.573 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:31.573 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:31.573 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.573 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:31.573 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.573 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.573 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:31.573 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:31.573 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:31.573 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.573 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:31.573 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:31.573 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:31.573 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.573 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.573 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
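With the null-group sweep done, the outer loops re-arm the host for ffdhe2048 and start again from key0. The overall shape of the sweep, reconstructed from the @118-@123 frames that repeat through this log (array contents beyond what the trace shows are not assumed):

for digest in "${digests[@]}"; do        # sha256 first
    for dhgroup in "${dhgroups[@]}"; do  # null, ffdhe2048, ffdhe3072, ...
        for keyid in "${!keys[@]}"; do   # key0..key3
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done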
common/autotest_common.sh@564 -- # xtrace_disable 00:18:31.573 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.573 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:31.573 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.573 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.573 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.834 00:18:31.834 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.834 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.834 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.094 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.094 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.094 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:32.094 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.094 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:32.094 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.094 { 00:18:32.094 "cntlid": 9, 00:18:32.094 "qid": 0, 00:18:32.094 "state": "enabled", 00:18:32.094 "thread": "nvmf_tgt_poll_group_000", 00:18:32.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:32.094 "listen_address": { 00:18:32.094 "trtype": "TCP", 00:18:32.094 "adrfam": "IPv4", 00:18:32.094 "traddr": "10.0.0.2", 00:18:32.094 "trsvcid": "4420" 00:18:32.094 }, 00:18:32.094 "peer_address": { 00:18:32.094 "trtype": "TCP", 00:18:32.094 "adrfam": "IPv4", 00:18:32.094 "traddr": "10.0.0.1", 00:18:32.094 "trsvcid": "44676" 00:18:32.094 }, 00:18:32.094 "auth": { 00:18:32.094 "state": "completed", 00:18:32.094 "digest": "sha256", 00:18:32.094 "dhgroup": "ffdhe2048" 00:18:32.094 } 00:18:32.094 } 00:18:32.094 ]' 00:18:32.094 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.094 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.094 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.094 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:18:32.094 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.094 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.094 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.094 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.354 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:18:32.354 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.295 09:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.295 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.556 00:18:33.556 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.556 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.556 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.817 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.817 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.817 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:33.817 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.817 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:33.817 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.817 { 00:18:33.817 "cntlid": 11, 00:18:33.817 "qid": 0, 00:18:33.817 "state": "enabled", 00:18:33.817 "thread": "nvmf_tgt_poll_group_000", 00:18:33.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:33.817 "listen_address": { 00:18:33.817 "trtype": "TCP", 00:18:33.817 "adrfam": "IPv4", 00:18:33.817 "traddr": "10.0.0.2", 00:18:33.817 "trsvcid": "4420" 00:18:33.817 }, 00:18:33.817 "peer_address": { 00:18:33.817 "trtype": "TCP", 00:18:33.817 "adrfam": "IPv4", 00:18:33.817 "traddr": "10.0.0.1", 00:18:33.817 "trsvcid": "44702" 00:18:33.817 }, 00:18:33.817 "auth": { 00:18:33.817 "state": "completed", 00:18:33.817 "digest": "sha256", 00:18:33.817 "dhgroup": "ffdhe2048" 00:18:33.817 } 00:18:33.817 } 00:18:33.817 ]' 00:18:33.817 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.817 09:39:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.817 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.817 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:33.817 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.817 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.817 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.817 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.078 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:18:34.078 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:18:34.649 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.649 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:34.649 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:34.649 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.649 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:34.649 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.649 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:34.649 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:34.910 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:34.910 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.910 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:34.910 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:34.910 09:39:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:34.910 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.910 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.910 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:34.910 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.910 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:34.910 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.910 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.911 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.171 00:18:35.171 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.171 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.171 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.171 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.171 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.171 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:35.171 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.171 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:35.171 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.171 { 00:18:35.171 "cntlid": 13, 00:18:35.171 "qid": 0, 00:18:35.171 "state": "enabled", 00:18:35.171 "thread": "nvmf_tgt_poll_group_000", 00:18:35.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:35.171 "listen_address": { 00:18:35.171 "trtype": "TCP", 00:18:35.171 "adrfam": "IPv4", 00:18:35.171 "traddr": "10.0.0.2", 00:18:35.171 "trsvcid": "4420" 00:18:35.171 }, 00:18:35.171 "peer_address": { 00:18:35.171 "trtype": "TCP", 00:18:35.171 "adrfam": "IPv4", 00:18:35.171 "traddr": "10.0.0.1", 00:18:35.171 "trsvcid": "42478" 00:18:35.171 }, 00:18:35.171 "auth": { 00:18:35.171 "state": "completed", 00:18:35.171 "digest": 
"sha256", 00:18:35.171 "dhgroup": "ffdhe2048" 00:18:35.171 } 00:18:35.171 } 00:18:35.171 ]' 00:18:35.171 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.431 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:35.431 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.431 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:35.431 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.431 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.431 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.431 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.692 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:18:35.692 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:18:36.264 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.264 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:36.264 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:36.264 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.264 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:36.264 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.264 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:36.264 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:36.525 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:36.525 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.525 09:39:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:36.525 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:36.525 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:36.525 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.525 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:36.525 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:36.525 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.526 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:36.526 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:36.526 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.526 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.787 00:18:36.787 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.787 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.787 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.787 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.787 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.787 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:36.787 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.787 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:36.787 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.787 { 00:18:36.787 "cntlid": 15, 00:18:36.787 "qid": 0, 00:18:36.787 "state": "enabled", 00:18:36.787 "thread": "nvmf_tgt_poll_group_000", 00:18:36.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:36.787 "listen_address": { 00:18:36.787 "trtype": "TCP", 00:18:36.787 "adrfam": "IPv4", 00:18:36.787 "traddr": "10.0.0.2", 00:18:36.787 "trsvcid": "4420" 00:18:36.787 }, 00:18:36.787 "peer_address": { 00:18:36.787 "trtype": "TCP", 00:18:36.787 "adrfam": "IPv4", 00:18:36.787 "traddr": "10.0.0.1", 00:18:36.787 
"trsvcid": "42496" 00:18:36.787 }, 00:18:36.787 "auth": { 00:18:36.787 "state": "completed", 00:18:36.787 "digest": "sha256", 00:18:36.787 "dhgroup": "ffdhe2048" 00:18:36.787 } 00:18:36.787 } 00:18:36.787 ]' 00:18:36.787 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.048 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.048 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.048 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:37.048 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.048 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.048 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.049 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.049 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:18:37.049 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:37.991 09:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.991 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.253 00:18:38.253 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.253 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.253 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.514 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.514 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.514 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:38.514 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.514 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:38.514 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.514 { 00:18:38.514 "cntlid": 17, 00:18:38.514 "qid": 0, 00:18:38.514 "state": "enabled", 00:18:38.514 "thread": "nvmf_tgt_poll_group_000", 00:18:38.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:38.514 "listen_address": { 00:18:38.514 "trtype": "TCP", 00:18:38.514 "adrfam": "IPv4", 
00:18:38.514 "traddr": "10.0.0.2", 00:18:38.514 "trsvcid": "4420" 00:18:38.514 }, 00:18:38.514 "peer_address": { 00:18:38.514 "trtype": "TCP", 00:18:38.514 "adrfam": "IPv4", 00:18:38.514 "traddr": "10.0.0.1", 00:18:38.514 "trsvcid": "42524" 00:18:38.514 }, 00:18:38.514 "auth": { 00:18:38.514 "state": "completed", 00:18:38.514 "digest": "sha256", 00:18:38.514 "dhgroup": "ffdhe3072" 00:18:38.514 } 00:18:38.514 } 00:18:38.514 ]' 00:18:38.514 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.514 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.514 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.514 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:38.514 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.514 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.514 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.514 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.776 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:18:38.776 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:18:39.377 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.377 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:39.377 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:39.377 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.377 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:39.377 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.377 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:39.377 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:39.639 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:39.639 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.639 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:39.639 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:39.639 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:39.639 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.639 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.639 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:39.639 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.639 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:39.639 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.639 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.639 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.901 00:18:39.901 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.901 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.901 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.901 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.901 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.901 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:39.901 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.162 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:40.162 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.162 { 
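
[Editor's sketch] After every attach the script cross-checks both sides before tearing the session down: the host stack must report controller nvme0, and the target's qpair dump must echo the digest and dhgroup that were just configured, with an auth state of "completed". The jq filters below are the ones visible in the trace; hostrpc and rpc_cmd are the suite's wrappers for the host-side and target-side RPC sockets respectively:

  name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
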
00:18:40.162 "cntlid": 19, 00:18:40.162 "qid": 0, 00:18:40.162 "state": "enabled", 00:18:40.162 "thread": "nvmf_tgt_poll_group_000", 00:18:40.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:40.162 "listen_address": { 00:18:40.162 "trtype": "TCP", 00:18:40.162 "adrfam": "IPv4", 00:18:40.162 "traddr": "10.0.0.2", 00:18:40.162 "trsvcid": "4420" 00:18:40.162 }, 00:18:40.162 "peer_address": { 00:18:40.162 "trtype": "TCP", 00:18:40.162 "adrfam": "IPv4", 00:18:40.162 "traddr": "10.0.0.1", 00:18:40.162 "trsvcid": "42548" 00:18:40.162 }, 00:18:40.162 "auth": { 00:18:40.162 "state": "completed", 00:18:40.162 "digest": "sha256", 00:18:40.162 "dhgroup": "ffdhe3072" 00:18:40.162 } 00:18:40.162 } 00:18:40.162 ]' 00:18:40.162 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.162 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.162 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.162 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:40.162 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.162 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.162 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.162 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.424 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:18:40.424 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:18:40.996 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.996 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:40.996 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:40.996 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.996 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:40.996 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.996 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:40.996 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:41.257 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:41.257 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.257 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:41.257 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:41.257 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:41.257 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.257 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.257 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:41.257 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.257 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:41.257 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.257 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.257 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.519 00:18:41.519 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.519 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.519 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.781 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.781 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.781 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:41.781 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.781 09:39:41 
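
[Editor's sketch] On the target side, each iteration authorizes the host NQN for exactly one key pair, as in the nvmf_subsystem_add_host entries above. Supplying --dhchap-ctrlr-key makes the exchange bidirectional: the host proves possession of key2 and the controller must prove ckey2 back. A sketch of the grant; the key names refer to keyring entries registered earlier in the test, outside this excerpt:

  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
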
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:41.781 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.781 { 00:18:41.781 "cntlid": 21, 00:18:41.781 "qid": 0, 00:18:41.781 "state": "enabled", 00:18:41.781 "thread": "nvmf_tgt_poll_group_000", 00:18:41.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:41.781 "listen_address": { 00:18:41.781 "trtype": "TCP", 00:18:41.781 "adrfam": "IPv4", 00:18:41.781 "traddr": "10.0.0.2", 00:18:41.781 "trsvcid": "4420" 00:18:41.781 }, 00:18:41.781 "peer_address": { 00:18:41.781 "trtype": "TCP", 00:18:41.781 "adrfam": "IPv4", 00:18:41.781 "traddr": "10.0.0.1", 00:18:41.781 "trsvcid": "42568" 00:18:41.781 }, 00:18:41.781 "auth": { 00:18:41.781 "state": "completed", 00:18:41.781 "digest": "sha256", 00:18:41.781 "dhgroup": "ffdhe3072" 00:18:41.781 } 00:18:41.781 } 00:18:41.781 ]' 00:18:41.781 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.781 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.781 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.781 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:41.781 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.781 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.781 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.781 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.042 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:18:42.042 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:18:42.615 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.615 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:42.615 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:42.615 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.615 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 
-- # [[ 0 == 0 ]] 00:18:42.615 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.615 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:42.615 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:42.877 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:42.877 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.877 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:42.877 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:42.877 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:42.877 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.877 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:42.877 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:42.877 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.877 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:42.877 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:42.877 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:42.877 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:43.138 00:18:43.138 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.138 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.138 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.400 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.400 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.400 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:43.400 09:39:42 
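
[Editor's sketch] The key3 iterations above call nvmf_subsystem_add_host with --dhchap-key key3 only: no controller key exists for that slot, so authentication is one-way. The ckey=(${ckeys[...]:+...}) assignment in the trace is the shell idiom that makes the flag optional. A standalone illustration of the pattern, with stand-in variable names and contents rather than the script's real ones:

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
  ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)   # index 3 deliberately absent
  keyid=3
  # :+ expands to the flag pair only when ckeys[keyid] is set, so key3 gets none.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"
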
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.400 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:43.400 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.400 { 00:18:43.400 "cntlid": 23, 00:18:43.400 "qid": 0, 00:18:43.400 "state": "enabled", 00:18:43.400 "thread": "nvmf_tgt_poll_group_000", 00:18:43.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:43.400 "listen_address": { 00:18:43.400 "trtype": "TCP", 00:18:43.400 "adrfam": "IPv4", 00:18:43.400 "traddr": "10.0.0.2", 00:18:43.400 "trsvcid": "4420" 00:18:43.400 }, 00:18:43.400 "peer_address": { 00:18:43.400 "trtype": "TCP", 00:18:43.400 "adrfam": "IPv4", 00:18:43.400 "traddr": "10.0.0.1", 00:18:43.400 "trsvcid": "42596" 00:18:43.400 }, 00:18:43.400 "auth": { 00:18:43.400 "state": "completed", 00:18:43.400 "digest": "sha256", 00:18:43.400 "dhgroup": "ffdhe3072" 00:18:43.400 } 00:18:43.400 } 00:18:43.400 ]' 00:18:43.400 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.400 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.400 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.400 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:43.400 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.400 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.400 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.400 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.661 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:18:43.661 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:18:44.232 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.232 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:44.232 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:44.232 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.232 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 
== 0 ]] 00:18:44.232 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.232 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.233 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:44.233 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:44.494 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:44.494 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.494 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:44.494 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:44.494 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:44.494 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.494 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.494 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:44.494 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.494 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:44.494 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.494 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.494 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.756 00:18:44.756 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.756 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.756 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.018 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.018 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.018 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.018 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.018 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.018 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.018 { 00:18:45.018 "cntlid": 25, 00:18:45.018 "qid": 0, 00:18:45.018 "state": "enabled", 00:18:45.018 "thread": "nvmf_tgt_poll_group_000", 00:18:45.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:45.018 "listen_address": { 00:18:45.018 "trtype": "TCP", 00:18:45.018 "adrfam": "IPv4", 00:18:45.018 "traddr": "10.0.0.2", 00:18:45.018 "trsvcid": "4420" 00:18:45.018 }, 00:18:45.018 "peer_address": { 00:18:45.018 "trtype": "TCP", 00:18:45.018 "adrfam": "IPv4", 00:18:45.018 "traddr": "10.0.0.1", 00:18:45.018 "trsvcid": "42616" 00:18:45.018 }, 00:18:45.018 "auth": { 00:18:45.018 "state": "completed", 00:18:45.018 "digest": "sha256", 00:18:45.018 "dhgroup": "ffdhe4096" 00:18:45.018 } 00:18:45.018 } 00:18:45.018 ]' 00:18:45.018 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.018 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.018 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.018 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:45.018 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.018 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.018 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.018 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.280 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:18:45.280 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:18:45.852 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.852 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:45.852 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.852 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.852 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.852 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.852 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:45.852 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:46.114 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:18:46.114 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.114 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:46.114 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:46.114 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:46.114 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.114 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.114 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:46.114 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.114 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:46.114 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.114 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.114 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.376 00:18:46.376 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.376 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.376 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.638 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.638 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.638 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:46.638 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.638 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:46.638 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.638 { 00:18:46.638 "cntlid": 27, 00:18:46.638 "qid": 0, 00:18:46.638 "state": "enabled", 00:18:46.638 "thread": "nvmf_tgt_poll_group_000", 00:18:46.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:46.638 "listen_address": { 00:18:46.638 "trtype": "TCP", 00:18:46.638 "adrfam": "IPv4", 00:18:46.638 "traddr": "10.0.0.2", 00:18:46.638 "trsvcid": "4420" 00:18:46.638 }, 00:18:46.638 "peer_address": { 00:18:46.638 "trtype": "TCP", 00:18:46.638 "adrfam": "IPv4", 00:18:46.638 "traddr": "10.0.0.1", 00:18:46.638 "trsvcid": "43746" 00:18:46.638 }, 00:18:46.638 "auth": { 00:18:46.638 "state": "completed", 00:18:46.638 "digest": "sha256", 00:18:46.638 "dhgroup": "ffdhe4096" 00:18:46.638 } 00:18:46.638 } 00:18:46.638 ]' 00:18:46.638 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.638 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:46.638 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.638 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:46.638 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.899 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.899 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.899 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.899 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:18:46.899 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:47.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.844 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.106 00:18:48.106 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
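
[Editor's sketch] The target/auth.sh@119/@120 loop markers running through this log come from one nested sweep: every FFDHE group is exercised with every key id under the sha256 digest before the outer digest loop moves on. Condensed to its shape; connect_authenticate and hostrpc are the script's own helpers, keys[] stands in for the DHHC-1 secrets set up earlier, and only the groups visible in this excerpt are listed (the full run may cover more):

  digest=sha256
  keys=(k0 k1 k2 k3)   # stand-ins for the real DHHC-1 secrets
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
      for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options \
              --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
  done
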
00:18:48.106 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.106 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.367 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.367 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.367 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:48.367 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.367 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:48.367 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.367 { 00:18:48.367 "cntlid": 29, 00:18:48.367 "qid": 0, 00:18:48.368 "state": "enabled", 00:18:48.368 "thread": "nvmf_tgt_poll_group_000", 00:18:48.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:48.368 "listen_address": { 00:18:48.368 "trtype": "TCP", 00:18:48.368 "adrfam": "IPv4", 00:18:48.368 "traddr": "10.0.0.2", 00:18:48.368 "trsvcid": "4420" 00:18:48.368 }, 00:18:48.368 "peer_address": { 00:18:48.368 "trtype": "TCP", 00:18:48.368 "adrfam": "IPv4", 00:18:48.368 "traddr": "10.0.0.1", 00:18:48.368 "trsvcid": "43760" 00:18:48.368 }, 00:18:48.368 "auth": { 00:18:48.368 "state": "completed", 00:18:48.368 "digest": "sha256", 00:18:48.368 "dhgroup": "ffdhe4096" 00:18:48.368 } 00:18:48.368 } 00:18:48.368 ]' 00:18:48.368 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.368 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.368 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.368 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:48.368 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.368 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.368 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.368 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.629 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:18:48.629 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: 
--dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:18:49.200 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.200 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:49.200 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:49.200 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.200 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:49.200 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.200 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:49.200 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:49.461 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:18:49.461 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.461 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:49.461 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:49.461 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:49.461 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.461 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:49.461 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:49.461 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.461 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:49.461 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:49.461 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:49.462 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:49.723 00:18:49.723 09:39:49 
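
[Editor's sketch] Every iteration above ends with the same three-step teardown before the next group/key combination is configured. Note the two control planes: hostrpc drives the host-side SPDK instance on /var/tmp/host.sock, while rpc_cmd drives the nvmf target on the default RPC socket. The sequence, as recorded in the detach/disconnect/remove_host entries:

  hostrpc bdev_nvme_detach_controller nvme0      # drop the SPDK host session
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0  # drop the kernel session
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
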
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.723 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.723 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.019 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.019 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.019 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:50.019 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.019 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:50.019 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.019 { 00:18:50.019 "cntlid": 31, 00:18:50.019 "qid": 0, 00:18:50.019 "state": "enabled", 00:18:50.019 "thread": "nvmf_tgt_poll_group_000", 00:18:50.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:50.019 "listen_address": { 00:18:50.019 "trtype": "TCP", 00:18:50.019 "adrfam": "IPv4", 00:18:50.019 "traddr": "10.0.0.2", 00:18:50.019 "trsvcid": "4420" 00:18:50.019 }, 00:18:50.019 "peer_address": { 00:18:50.019 "trtype": "TCP", 00:18:50.019 "adrfam": "IPv4", 00:18:50.019 "traddr": "10.0.0.1", 00:18:50.019 "trsvcid": "43774" 00:18:50.019 }, 00:18:50.019 "auth": { 00:18:50.019 "state": "completed", 00:18:50.019 "digest": "sha256", 00:18:50.019 "dhgroup": "ffdhe4096" 00:18:50.019 } 00:18:50.019 } 00:18:50.019 ]' 00:18:50.019 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.019 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.019 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.019 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:50.019 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.019 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.019 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.019 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.281 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:18:50.281 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret 
DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:18:50.853 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.853 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:50.853 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:50.853 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.853 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:50.853 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.853 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.853 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:50.853 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:51.116 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:18:51.116 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.116 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:51.116 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:51.116 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:51.116 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.116 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.116 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:51.116 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.116 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:51.116 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.116 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.116 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.376 00:18:51.376 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.376 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.376 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.637 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.637 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.637 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:51.637 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.637 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:51.637 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.637 { 00:18:51.637 "cntlid": 33, 00:18:51.637 "qid": 0, 00:18:51.637 "state": "enabled", 00:18:51.637 "thread": "nvmf_tgt_poll_group_000", 00:18:51.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:51.638 "listen_address": { 00:18:51.638 "trtype": "TCP", 00:18:51.638 "adrfam": "IPv4", 00:18:51.638 "traddr": "10.0.0.2", 00:18:51.638 "trsvcid": "4420" 00:18:51.638 }, 00:18:51.638 "peer_address": { 00:18:51.638 "trtype": "TCP", 00:18:51.638 "adrfam": "IPv4", 00:18:51.638 "traddr": "10.0.0.1", 00:18:51.638 "trsvcid": "43806" 00:18:51.638 }, 00:18:51.638 "auth": { 00:18:51.638 "state": "completed", 00:18:51.638 "digest": "sha256", 00:18:51.638 "dhgroup": "ffdhe6144" 00:18:51.638 } 00:18:51.638 } 00:18:51.638 ]' 00:18:51.638 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.638 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.638 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.638 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:51.638 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.638 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.638 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.638 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.898 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret 
DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:18:51.898 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:18:52.470 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.470 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:52.470 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:52.470 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.470 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:52.470 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.470 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:52.470 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:52.731 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:18:52.731 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.731 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:52.731 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:52.731 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:52.731 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.731 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.731 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:52.731 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.731 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:52.731 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.731 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.731 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.991 00:18:53.252 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.252 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.253 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.253 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.253 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.253 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:53.253 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.253 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:53.253 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.253 { 00:18:53.253 "cntlid": 35, 00:18:53.253 "qid": 0, 00:18:53.253 "state": "enabled", 00:18:53.253 "thread": "nvmf_tgt_poll_group_000", 00:18:53.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:53.253 "listen_address": { 00:18:53.253 "trtype": "TCP", 00:18:53.253 "adrfam": "IPv4", 00:18:53.253 "traddr": "10.0.0.2", 00:18:53.253 "trsvcid": "4420" 00:18:53.253 }, 00:18:53.253 "peer_address": { 00:18:53.253 "trtype": "TCP", 00:18:53.253 "adrfam": "IPv4", 00:18:53.253 "traddr": "10.0.0.1", 00:18:53.253 "trsvcid": "43842" 00:18:53.253 }, 00:18:53.253 "auth": { 00:18:53.253 "state": "completed", 00:18:53.253 "digest": "sha256", 00:18:53.253 "dhgroup": "ffdhe6144" 00:18:53.253 } 00:18:53.253 } 00:18:53.253 ]' 00:18:53.253 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.253 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.253 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.515 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:53.515 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.515 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.515 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.515 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.515 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:18:53.515 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:18:54.459 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.459 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:54.459 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:54.459 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.459 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:54.459 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.459 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:54.459 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:54.459 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:18:54.459 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.459 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:54.459 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:54.459 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:54.459 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.459 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.459 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:54.459 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.459 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:54.459 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.459 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.459 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.720 00:18:54.720 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.720 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.720 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.980 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.980 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.980 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:54.980 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.980 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:54.980 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.980 { 00:18:54.980 "cntlid": 37, 00:18:54.980 "qid": 0, 00:18:54.980 "state": "enabled", 00:18:54.980 "thread": "nvmf_tgt_poll_group_000", 00:18:54.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:54.980 "listen_address": { 00:18:54.980 "trtype": "TCP", 00:18:54.980 "adrfam": "IPv4", 00:18:54.980 "traddr": "10.0.0.2", 00:18:54.980 "trsvcid": "4420" 00:18:54.980 }, 00:18:54.980 "peer_address": { 00:18:54.980 "trtype": "TCP", 00:18:54.980 "adrfam": "IPv4", 00:18:54.980 "traddr": "10.0.0.1", 00:18:54.980 "trsvcid": "43884" 00:18:54.980 }, 00:18:54.980 "auth": { 00:18:54.980 "state": "completed", 00:18:54.980 "digest": "sha256", 00:18:54.980 "dhgroup": "ffdhe6144" 00:18:54.980 } 00:18:54.980 } 00:18:54.980 ]' 00:18:54.980 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.980 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.980 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.243 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:55.243 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.243 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.243 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:55.243 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.243 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:18:55.243 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.188 09:39:55 
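Each combination that survives the SPDK-to-SPDK attach is replayed through the kernel initiator, handing nvme-cli the same material in its DHHC-1 wire form. In that representation the second field records how the secret was transformed (00 = unprotected, 01/02/03 = transformed with SHA-256/384/512) and the base64 payload carries the key plus a CRC-32 check, which is why the four test secrets appear throughout this log with prefixes DHHC-1:00: through DHHC-1:03:. A condensed form of the nvme-cli leg (addresses, NQNs, and flags from the log; HOST_SECRET and CTRL_SECRET stand in for the full DHHC-1 strings, shortened here only for readability):

  host=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$host" --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 \
      --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0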
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:56.188 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:56.449 00:18:56.449 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.449 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.449 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.711 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.711 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.711 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:56.711 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.711 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:56.711 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.711 { 00:18:56.711 "cntlid": 39, 00:18:56.711 "qid": 0, 00:18:56.711 "state": "enabled", 00:18:56.711 "thread": "nvmf_tgt_poll_group_000", 00:18:56.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:56.711 "listen_address": { 00:18:56.711 "trtype": "TCP", 00:18:56.711 "adrfam": "IPv4", 00:18:56.711 "traddr": "10.0.0.2", 00:18:56.711 "trsvcid": "4420" 00:18:56.711 }, 00:18:56.711 "peer_address": { 00:18:56.711 "trtype": "TCP", 00:18:56.711 "adrfam": "IPv4", 00:18:56.711 "traddr": "10.0.0.1", 00:18:56.711 "trsvcid": "46038" 00:18:56.711 }, 00:18:56.711 "auth": { 00:18:56.711 "state": "completed", 00:18:56.711 "digest": "sha256", 00:18:56.711 "dhgroup": "ffdhe6144" 00:18:56.711 } 00:18:56.711 } 00:18:56.711 ]' 00:18:56.711 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.711 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.711 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.972 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:56.972 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.972 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:18:56.972 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.972 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.972 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:18:56.972 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 
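Note the array expansion that keeps appearing as ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}): $3 is connect_authenticate's key-index argument, so a controller key is appended only when one was generated for that slot. That is why the key0..key2 passes in this log request mutual authentication (--dhchap-key plus --dhchap-ctrlr-key, letting the host verify the controller in return) while the key3 passes are unidirectional. The same guard as a minimal standalone sketch:

  # $3 is the key index; ckeys[] is allowed to be empty at a given slot
  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$host" \
      --dhchap-key "key$3" "${ckey[@]}"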
00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.915 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.488 00:18:58.488 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.488 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.488 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.488 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.488 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.488 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:58.488 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.488 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:58.488 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.488 { 00:18:58.488 "cntlid": 41, 00:18:58.488 "qid": 0, 00:18:58.488 "state": "enabled", 00:18:58.488 "thread": "nvmf_tgt_poll_group_000", 00:18:58.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:58.488 "listen_address": { 00:18:58.488 "trtype": "TCP", 00:18:58.488 "adrfam": "IPv4", 00:18:58.488 "traddr": "10.0.0.2", 00:18:58.488 "trsvcid": "4420" 00:18:58.488 }, 00:18:58.488 "peer_address": { 00:18:58.488 "trtype": "TCP", 00:18:58.488 "adrfam": "IPv4", 00:18:58.488 "traddr": "10.0.0.1", 00:18:58.488 "trsvcid": "46054" 00:18:58.488 }, 00:18:58.488 "auth": { 00:18:58.488 "state": "completed", 00:18:58.488 "digest": "sha256", 00:18:58.488 "dhgroup": "ffdhe8192" 00:18:58.488 } 00:18:58.488 } 00:18:58.488 ]' 00:18:58.488 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.750 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.750 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.750 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:58.750 09:39:58 
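Every pass is then verified from the target's point of view: nvmf_subsystem_get_qpairs must report exactly the digest and DH group the host was restricted to, with the authentication state "completed". The three jq assertions from the trace, condensed into one block (subsystem NQN from the log; this pass expects ffdhe8192):

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]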
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.750 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.750 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.750 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.010 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:18:59.010 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:18:59.582 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.582 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:59.582 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:59.582 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.582 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:59.582 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.582 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:59.582 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:59.885 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:59.885 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.885 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:59.885 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:59.885 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:59.885 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.885 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.885 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:59.885 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.885 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:59.885 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.885 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.885 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.260 00:19:00.260 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.261 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.261 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.594 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.594 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.594 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:00.594 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.594 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:00.594 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.594 { 00:19:00.594 "cntlid": 43, 00:19:00.594 "qid": 0, 00:19:00.594 "state": "enabled", 00:19:00.594 "thread": "nvmf_tgt_poll_group_000", 00:19:00.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:00.594 "listen_address": { 00:19:00.594 "trtype": "TCP", 00:19:00.594 "adrfam": "IPv4", 00:19:00.594 "traddr": "10.0.0.2", 00:19:00.594 "trsvcid": "4420" 00:19:00.594 }, 00:19:00.594 "peer_address": { 00:19:00.594 "trtype": "TCP", 00:19:00.594 "adrfam": "IPv4", 00:19:00.594 "traddr": "10.0.0.1", 00:19:00.594 "trsvcid": "46086" 00:19:00.594 }, 00:19:00.594 "auth": { 00:19:00.594 "state": "completed", 00:19:00.594 "digest": "sha256", 00:19:00.594 "dhgroup": "ffdhe8192" 00:19:00.594 } 00:19:00.594 } 00:19:00.594 ]' 00:19:00.594 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.594 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:00.594 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.594 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:00.594 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.594 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.594 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.594 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.889 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:19:00.889 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:19:01.462 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.462 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:01.462 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:01.462 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.462 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:01.462 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.462 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:01.462 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:01.721 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:01.721 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.721 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:01.721 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:01.721 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:01.721 09:40:01 
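The @118/@119/@120 markers that keep reappearing are the three nested loops driving this whole section: digests outermost, DH groups in the middle, key indices innermost, with the host reconfigured at @121 before every attach. A reconstructed skeleton, with the arrays limited to values actually visible in this excerpt (the script's real arrays may hold more entries, and their order is inferred, not authoritative):

  digests=(sha256 sha384)
  dhgroups=(null ffdhe4096 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done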
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.722 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.722 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:01.722 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.722 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:01.722 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.722 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.722 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.981 00:19:01.981 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.981 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.981 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.241 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.241 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.241 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:02.241 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.241 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:02.241 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.241 { 00:19:02.241 "cntlid": 45, 00:19:02.241 "qid": 0, 00:19:02.241 "state": "enabled", 00:19:02.241 "thread": "nvmf_tgt_poll_group_000", 00:19:02.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:02.241 "listen_address": { 00:19:02.241 "trtype": "TCP", 00:19:02.241 "adrfam": "IPv4", 00:19:02.241 "traddr": "10.0.0.2", 00:19:02.241 "trsvcid": "4420" 00:19:02.241 }, 00:19:02.241 "peer_address": { 00:19:02.241 "trtype": "TCP", 00:19:02.241 "adrfam": "IPv4", 00:19:02.241 "traddr": "10.0.0.1", 00:19:02.241 "trsvcid": "46120" 00:19:02.241 }, 00:19:02.241 "auth": { 00:19:02.241 "state": "completed", 00:19:02.241 "digest": "sha256", 00:19:02.241 "dhgroup": "ffdhe8192" 00:19:02.241 } 00:19:02.241 } 00:19:02.241 ]' 00:19:02.241 
09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.241 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.241 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.502 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:02.502 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.502 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.502 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.502 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.502 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:19:02.502 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:19:03.442 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.442 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:03.442 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:03.442 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.442 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:03.442 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.442 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:03.442 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:03.442 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:03.442 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.442 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:03.442 09:40:02 
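Before any qpair inspection, each pass first confirms the controller actually came up on the host side; the escaped \n\v\m\e\0 seen throughout is just how bash xtrace renders the quoted right-hand side of a [[ == ]] comparison, not a corrupted string. Condensed:

  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]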
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:03.442 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:03.442 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.443 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:03.443 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:03.443 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.443 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:03.443 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:03.443 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:03.443 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.014 00:19:04.014 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.014 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.014 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.014 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.014 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.014 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:04.014 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.014 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:04.014 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.014 { 00:19:04.014 "cntlid": 47, 00:19:04.014 "qid": 0, 00:19:04.014 "state": "enabled", 00:19:04.014 "thread": "nvmf_tgt_poll_group_000", 00:19:04.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:04.014 "listen_address": { 00:19:04.014 "trtype": "TCP", 00:19:04.014 "adrfam": "IPv4", 00:19:04.014 "traddr": "10.0.0.2", 00:19:04.014 "trsvcid": "4420" 00:19:04.014 }, 00:19:04.014 "peer_address": { 00:19:04.014 "trtype": "TCP", 00:19:04.014 "adrfam": "IPv4", 00:19:04.014 "traddr": "10.0.0.1", 00:19:04.014 "trsvcid": "46138" 00:19:04.014 }, 00:19:04.014 "auth": { 00:19:04.014 "state": "completed", 00:19:04.014 
"digest": "sha256", 00:19:04.015 "dhgroup": "ffdhe8192" 00:19:04.015 } 00:19:04.015 } 00:19:04.015 ]' 00:19:04.015 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.275 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.275 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.275 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:04.275 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.275 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.275 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.275 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.535 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:19:04.535 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:19:05.107 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.107 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:05.107 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:05.107 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.108 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:05.108 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:05.108 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.108 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.108 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:05.108 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:05.368 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:05.368 09:40:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.368 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:05.368 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:05.368 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:05.368 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.368 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.368 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:05.368 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.368 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:05.368 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.368 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.368 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.630 00:19:05.630 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.630 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.630 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.630 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.630 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.630 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:05.630 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.630 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:05.630 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.630 { 00:19:05.630 "cntlid": 49, 00:19:05.630 "qid": 0, 00:19:05.630 "state": "enabled", 00:19:05.630 "thread": "nvmf_tgt_poll_group_000", 00:19:05.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:05.630 "listen_address": { 00:19:05.630 "trtype": "TCP", 00:19:05.630 "adrfam": "IPv4", 
00:19:05.630 "traddr": "10.0.0.2", 00:19:05.630 "trsvcid": "4420" 00:19:05.630 }, 00:19:05.630 "peer_address": { 00:19:05.630 "trtype": "TCP", 00:19:05.630 "adrfam": "IPv4", 00:19:05.630 "traddr": "10.0.0.1", 00:19:05.630 "trsvcid": "50700" 00:19:05.630 }, 00:19:05.630 "auth": { 00:19:05.630 "state": "completed", 00:19:05.630 "digest": "sha384", 00:19:05.630 "dhgroup": "null" 00:19:05.630 } 00:19:05.630 } 00:19:05.630 ]' 00:19:05.630 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.891 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:05.891 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.891 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:05.891 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.891 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.891 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.891 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.152 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:19:06.152 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:19:06.723 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.723 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:06.723 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:06.723 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.723 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:06.723 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.723 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:06.723 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:06.984 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:06.984 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.984 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:06.984 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:06.984 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:06.984 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.984 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.984 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:06.984 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.984 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:06.984 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.984 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.984 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.984 00:19:07.245 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.245 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.245 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.245 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.245 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.245 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:07.245 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.245 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:07.245 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.245 { 00:19:07.245 "cntlid": 51, 00:19:07.245 "qid": 0, 00:19:07.245 "state": "enabled", 
00:19:07.245 "thread": "nvmf_tgt_poll_group_000", 00:19:07.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:07.245 "listen_address": { 00:19:07.245 "trtype": "TCP", 00:19:07.245 "adrfam": "IPv4", 00:19:07.245 "traddr": "10.0.0.2", 00:19:07.245 "trsvcid": "4420" 00:19:07.245 }, 00:19:07.245 "peer_address": { 00:19:07.245 "trtype": "TCP", 00:19:07.245 "adrfam": "IPv4", 00:19:07.245 "traddr": "10.0.0.1", 00:19:07.245 "trsvcid": "50728" 00:19:07.245 }, 00:19:07.245 "auth": { 00:19:07.245 "state": "completed", 00:19:07.245 "digest": "sha384", 00:19:07.245 "dhgroup": "null" 00:19:07.245 } 00:19:07.245 } 00:19:07.245 ]' 00:19:07.245 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.245 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:07.507 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.507 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:07.507 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.507 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.507 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.507 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.768 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:19:07.768 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:19:08.340 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.340 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:08.340 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:08.340 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.340 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:08.340 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.340 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:19:08.340 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:08.601 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:08.601 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.602 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:08.602 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:08.602 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:08.602 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.602 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.602 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:08.602 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.602 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:08.602 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.602 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.602 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.863 00:19:08.863 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.863 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.863 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.863 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.863 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.863 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:08.863 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.863 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:08.863 09:40:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.863 { 00:19:08.863 "cntlid": 53, 00:19:08.863 "qid": 0, 00:19:08.863 "state": "enabled", 00:19:08.863 "thread": "nvmf_tgt_poll_group_000", 00:19:08.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:08.863 "listen_address": { 00:19:08.863 "trtype": "TCP", 00:19:08.863 "adrfam": "IPv4", 00:19:08.863 "traddr": "10.0.0.2", 00:19:08.863 "trsvcid": "4420" 00:19:08.863 }, 00:19:08.863 "peer_address": { 00:19:08.863 "trtype": "TCP", 00:19:08.863 "adrfam": "IPv4", 00:19:08.863 "traddr": "10.0.0.1", 00:19:08.863 "trsvcid": "50758" 00:19:08.863 }, 00:19:08.863 "auth": { 00:19:08.863 "state": "completed", 00:19:08.863 "digest": "sha384", 00:19:08.863 "dhgroup": "null" 00:19:08.863 } 00:19:08.863 } 00:19:08.863 ]' 00:19:08.863 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.863 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.863 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.123 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:09.123 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.123 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.123 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.123 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.124 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:19:09.124 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.065 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.326 00:19:10.326 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.326 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.326 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.586 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.586 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.586 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:10.586 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.587 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:10.587 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.587 { 00:19:10.587 "cntlid": 55, 00:19:10.587 "qid": 0, 00:19:10.587 "state": "enabled", 00:19:10.587 "thread": "nvmf_tgt_poll_group_000", 00:19:10.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:10.587 "listen_address": { 00:19:10.587 "trtype": "TCP", 00:19:10.587 "adrfam": "IPv4", 00:19:10.587 "traddr": "10.0.0.2", 00:19:10.587 "trsvcid": "4420" 00:19:10.587 }, 00:19:10.587 "peer_address": { 00:19:10.587 "trtype": "TCP", 00:19:10.587 "adrfam": "IPv4", 00:19:10.587 "traddr": "10.0.0.1", 00:19:10.587 "trsvcid": "50786" 00:19:10.587 }, 00:19:10.587 "auth": { 00:19:10.587 "state": "completed", 00:19:10.587 "digest": "sha384", 00:19:10.587 "dhgroup": "null" 00:19:10.587 } 00:19:10.587 } 00:19:10.587 ]' 00:19:10.587 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.587 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:10.587 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.587 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:10.587 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.587 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.587 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.587 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.848 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:19:10.848 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:19:11.418 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.679 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:11.679 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:11.679 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.679 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:11.679 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.679 09:40:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.679 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:11.679 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:11.679 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:19:11.679 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.679 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:11.679 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:11.679 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:11.679 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.679 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.679 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:11.680 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.680 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:11.680 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.680 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.680 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.941 00:19:11.941 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.941 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.941 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.201 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.201 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.201 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:19:12.201 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.201 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:12.201 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.201 { 00:19:12.201 "cntlid": 57, 00:19:12.201 "qid": 0, 00:19:12.201 "state": "enabled", 00:19:12.201 "thread": "nvmf_tgt_poll_group_000", 00:19:12.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:12.201 "listen_address": { 00:19:12.201 "trtype": "TCP", 00:19:12.201 "adrfam": "IPv4", 00:19:12.201 "traddr": "10.0.0.2", 00:19:12.201 "trsvcid": "4420" 00:19:12.201 }, 00:19:12.201 "peer_address": { 00:19:12.201 "trtype": "TCP", 00:19:12.201 "adrfam": "IPv4", 00:19:12.201 "traddr": "10.0.0.1", 00:19:12.201 "trsvcid": "50814" 00:19:12.201 }, 00:19:12.201 "auth": { 00:19:12.201 "state": "completed", 00:19:12.201 "digest": "sha384", 00:19:12.201 "dhgroup": "ffdhe2048" 00:19:12.201 } 00:19:12.201 } 00:19:12.201 ]' 00:19:12.201 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.201 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:12.201 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.201 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:12.201 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.201 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.201 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.201 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.463 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:19:12.463 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:19:13.046 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.046 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:13.046 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:19:13.046 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.046 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:13.046 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.046 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:13.046 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:13.306 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:13.306 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.306 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:13.306 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:13.306 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:13.307 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.307 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.307 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:13.307 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.307 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:13.307 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.307 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.307 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.567 00:19:13.567 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.567 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.567 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.828 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.828 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.828 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:13.828 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.828 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:13.828 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.828 { 00:19:13.828 "cntlid": 59, 00:19:13.828 "qid": 0, 00:19:13.828 "state": "enabled", 00:19:13.828 "thread": "nvmf_tgt_poll_group_000", 00:19:13.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:13.828 "listen_address": { 00:19:13.828 "trtype": "TCP", 00:19:13.828 "adrfam": "IPv4", 00:19:13.828 "traddr": "10.0.0.2", 00:19:13.828 "trsvcid": "4420" 00:19:13.828 }, 00:19:13.828 "peer_address": { 00:19:13.828 "trtype": "TCP", 00:19:13.828 "adrfam": "IPv4", 00:19:13.828 "traddr": "10.0.0.1", 00:19:13.828 "trsvcid": "50848" 00:19:13.828 }, 00:19:13.828 "auth": { 00:19:13.828 "state": "completed", 00:19:13.828 "digest": "sha384", 00:19:13.828 "dhgroup": "ffdhe2048" 00:19:13.828 } 00:19:13.828 } 00:19:13.828 ]' 00:19:13.828 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.828 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.828 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.828 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.828 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.828 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.828 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.828 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.089 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:19:14.089 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:19:14.660 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.661 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:14.661 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:14.661 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.661 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:14.661 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.661 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:14.661 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:14.922 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:19:14.922 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.922 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:14.922 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:14.922 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:14.922 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.922 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.922 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:14.922 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.922 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:14.922 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.922 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.922 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.182 00:19:15.182 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.182 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.182 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.443 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.443 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.443 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:15.443 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.443 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:15.443 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.443 { 00:19:15.443 "cntlid": 61, 00:19:15.443 "qid": 0, 00:19:15.443 "state": "enabled", 00:19:15.443 "thread": "nvmf_tgt_poll_group_000", 00:19:15.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:15.443 "listen_address": { 00:19:15.443 "trtype": "TCP", 00:19:15.443 "adrfam": "IPv4", 00:19:15.443 "traddr": "10.0.0.2", 00:19:15.443 "trsvcid": "4420" 00:19:15.443 }, 00:19:15.443 "peer_address": { 00:19:15.443 "trtype": "TCP", 00:19:15.443 "adrfam": "IPv4", 00:19:15.443 "traddr": "10.0.0.1", 00:19:15.443 "trsvcid": "42100" 00:19:15.443 }, 00:19:15.443 "auth": { 00:19:15.443 "state": "completed", 00:19:15.443 "digest": "sha384", 00:19:15.443 "dhgroup": "ffdhe2048" 00:19:15.443 } 00:19:15.443 } 00:19:15.443 ]' 00:19:15.443 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.443 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.443 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.443 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:15.443 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.443 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.443 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.443 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.704 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:19:15.704 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:19:16.278 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.278 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:16.278 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:16.278 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.278 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:16.278 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.278 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:16.278 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:16.539 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:19:16.539 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.539 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:16.539 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:16.539 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:16.539 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.539 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:16.539 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:16.540 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.540 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:16.540 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:16.540 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:16.540 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:16.801 00:19:16.801 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.801 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.801 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.062 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.062 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.062 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:17.062 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.062 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:17.062 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.062 { 00:19:17.062 "cntlid": 63, 00:19:17.062 "qid": 0, 00:19:17.062 "state": "enabled", 00:19:17.062 "thread": "nvmf_tgt_poll_group_000", 00:19:17.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:17.062 "listen_address": { 00:19:17.062 "trtype": "TCP", 00:19:17.062 "adrfam": "IPv4", 00:19:17.062 "traddr": "10.0.0.2", 00:19:17.062 "trsvcid": "4420" 00:19:17.062 }, 00:19:17.062 "peer_address": { 00:19:17.062 "trtype": "TCP", 00:19:17.062 "adrfam": "IPv4", 00:19:17.062 "traddr": "10.0.0.1", 00:19:17.062 "trsvcid": "42130" 00:19:17.062 }, 00:19:17.062 "auth": { 00:19:17.062 "state": "completed", 00:19:17.062 "digest": "sha384", 00:19:17.062 "dhgroup": "ffdhe2048" 00:19:17.062 } 00:19:17.062 } 00:19:17.062 ]' 00:19:17.062 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.063 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.063 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.063 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:17.063 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.063 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.063 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.063 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.323 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:19:17.323 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:19:17.895 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:17.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.895 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:17.895 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:17.895 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.895 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:17.895 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.895 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.895 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:17.895 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:18.155 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:19:18.156 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.156 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:18.156 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:18.156 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:18.156 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.156 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.156 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:18.156 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.156 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:18.156 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.156 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.156 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.416 
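Each SPDK-side cycle in this pass is mirrored on the Linux kernel host: nvme connect replays the same key material as DHHC-1 secret strings (the two-digit field after DHHC-1: encodes the secret's hash transformation, 00 being an unhashed key), and supplying only --dhchap-secret, as the key3 iterations above do, requests one-way rather than bidirectional authentication. A sketch of that leg, with placeholder secrets standing in for the base64 blobs logged above:

  # Kernel-initiator leg of one cycle (placeholder secrets; transport details
  # match the log).
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "${hostnqn#nqn.2014-08.org.nvmexpress:uuid:}" -l 0 \
      --dhchap-secret 'DHHC-1:00:<host key>' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller key>'

  # Success is implied by connect returning 0; the test only needs the
  # association to come up, so it disconnects immediately and deregisters the
  # host (nvmf_subsystem_remove_host) before the next digest/dhgroup/keyid.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0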
00:19:18.416 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.416 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.416 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.677 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.677 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.677 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:18.677 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.677 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:18.677 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.677 { 00:19:18.677 "cntlid": 65, 00:19:18.677 "qid": 0, 00:19:18.677 "state": "enabled", 00:19:18.677 "thread": "nvmf_tgt_poll_group_000", 00:19:18.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:18.677 "listen_address": { 00:19:18.677 "trtype": "TCP", 00:19:18.677 "adrfam": "IPv4", 00:19:18.677 "traddr": "10.0.0.2", 00:19:18.677 "trsvcid": "4420" 00:19:18.677 }, 00:19:18.677 "peer_address": { 00:19:18.677 "trtype": "TCP", 00:19:18.677 "adrfam": "IPv4", 00:19:18.677 "traddr": "10.0.0.1", 00:19:18.677 "trsvcid": "42144" 00:19:18.677 }, 00:19:18.677 "auth": { 00:19:18.677 "state": "completed", 00:19:18.677 "digest": "sha384", 00:19:18.677 "dhgroup": "ffdhe3072" 00:19:18.677 } 00:19:18.677 } 00:19:18.677 ]' 00:19:18.677 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.677 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.677 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.677 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:18.677 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.677 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.677 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.677 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.938 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:19:18.938 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:19:19.508 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.770 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:19.770 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:19.770 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.770 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:19.770 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.770 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:19.770 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:19.770 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:19:19.770 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.770 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:19.770 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:19.770 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:19.771 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.771 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.771 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:19.771 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.771 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:19.771 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.771 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.771 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.032 00:19:20.032 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.032 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.032 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.293 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.293 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.293 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:20.293 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.293 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:20.293 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.293 { 00:19:20.293 "cntlid": 67, 00:19:20.293 "qid": 0, 00:19:20.293 "state": "enabled", 00:19:20.293 "thread": "nvmf_tgt_poll_group_000", 00:19:20.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:20.293 "listen_address": { 00:19:20.293 "trtype": "TCP", 00:19:20.293 "adrfam": "IPv4", 00:19:20.293 "traddr": "10.0.0.2", 00:19:20.293 "trsvcid": "4420" 00:19:20.293 }, 00:19:20.293 "peer_address": { 00:19:20.293 "trtype": "TCP", 00:19:20.293 "adrfam": "IPv4", 00:19:20.293 "traddr": "10.0.0.1", 00:19:20.293 "trsvcid": "42164" 00:19:20.293 }, 00:19:20.293 "auth": { 00:19:20.293 "state": "completed", 00:19:20.293 "digest": "sha384", 00:19:20.293 "dhgroup": "ffdhe3072" 00:19:20.293 } 00:19:20.293 } 00:19:20.293 ]' 00:19:20.293 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.293 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.293 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.293 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.293 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.553 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.553 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.553 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.553 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret 
DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:19:20.553 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:19:21.495 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.495 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:21.495 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:21.495 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.495 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:21.495 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.495 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:21.495 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:21.495 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:19:21.495 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.495 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:21.495 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:21.495 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:21.495 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.495 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.495 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:21.495 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.495 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:21.495 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.495 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.495 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.757 00:19:21.757 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.757 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.757 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.019 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.019 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.019 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:22.019 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.019 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:22.019 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.019 { 00:19:22.019 "cntlid": 69, 00:19:22.019 "qid": 0, 00:19:22.019 "state": "enabled", 00:19:22.019 "thread": "nvmf_tgt_poll_group_000", 00:19:22.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:22.019 "listen_address": { 00:19:22.019 "trtype": "TCP", 00:19:22.019 "adrfam": "IPv4", 00:19:22.019 "traddr": "10.0.0.2", 00:19:22.019 "trsvcid": "4420" 00:19:22.019 }, 00:19:22.019 "peer_address": { 00:19:22.019 "trtype": "TCP", 00:19:22.019 "adrfam": "IPv4", 00:19:22.019 "traddr": "10.0.0.1", 00:19:22.019 "trsvcid": "42184" 00:19:22.019 }, 00:19:22.019 "auth": { 00:19:22.019 "state": "completed", 00:19:22.019 "digest": "sha384", 00:19:22.019 "dhgroup": "ffdhe3072" 00:19:22.019 } 00:19:22.019 } 00:19:22.019 ]' 00:19:22.019 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.019 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.019 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.019 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:22.019 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.019 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.019 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.019 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:22.281 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:19:22.281 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:19:22.852 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.852 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:22.852 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:22.852 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.852 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:22.852 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.852 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:22.852 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:23.114 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:23.114 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.114 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:23.114 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:23.114 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:23.114 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.114 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:23.114 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:23.114 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.114 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:23.114 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
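The key3 round above passes --dhchap-key key3 with no --dhchap-ctrlr-key at all: connect_authenticate builds that argument with the ${ckeys[$3]:+...} expansion visible in the trace, which yields nothing when the ckeys entry for the key id is empty or unset, so bidirectional authentication is simply skipped for that round. A standalone rendering of the idiom, with the function's positional parameter replaced by a plain variable for illustration:

  ckeys=(ckey0 ckey1 ckey2 "")   # no controller key defined for key id 3
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${#ckey[@]}"             # prints 0: the array is empty, nothing is passed on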
00:19:23.114 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.114 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.375 00:19:23.375 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.375 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.375 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.636 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.636 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.636 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:23.636 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.636 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:23.636 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.636 { 00:19:23.636 "cntlid": 71, 00:19:23.636 "qid": 0, 00:19:23.636 "state": "enabled", 00:19:23.636 "thread": "nvmf_tgt_poll_group_000", 00:19:23.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:23.636 "listen_address": { 00:19:23.636 "trtype": "TCP", 00:19:23.636 "adrfam": "IPv4", 00:19:23.636 "traddr": "10.0.0.2", 00:19:23.636 "trsvcid": "4420" 00:19:23.636 }, 00:19:23.636 "peer_address": { 00:19:23.636 "trtype": "TCP", 00:19:23.636 "adrfam": "IPv4", 00:19:23.636 "traddr": "10.0.0.1", 00:19:23.636 "trsvcid": "42220" 00:19:23.636 }, 00:19:23.636 "auth": { 00:19:23.636 "state": "completed", 00:19:23.636 "digest": "sha384", 00:19:23.636 "dhgroup": "ffdhe3072" 00:19:23.636 } 00:19:23.636 } 00:19:23.636 ]' 00:19:23.636 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.636 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:23.636 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.636 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:23.636 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.636 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.636 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.636 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.897 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:19:23.897 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:19:24.469 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.469 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:24.470 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:24.470 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.470 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:24.470 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.470 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.470 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:24.470 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:24.730 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:24.730 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.730 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:24.730 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:24.730 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:24.730 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.730 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.730 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:24.730 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.730 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
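In between the RPC-driven rounds the script also checks the handshake in-band with the kernel initiator: nvme connect is handed the same key material as printable DHHC-1 secrets, and success is observed simply as the controller coming up and later disconnecting cleanly ("disconnected 1 controller(s)"). A sketch with placeholder secrets, since the DHHC-1:NN:...: strings in the trace are the run's actual test keys; $hostnqn and $hostid stand in for the host NQN and ID, and the flag comments reflect standard nvme-cli meanings:

  # -i 1: a single I/O queue; -l 0: ctrl-loss-tmo 0, fail fast instead of retrying
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret 'DHHC-1:00:<host key, base64>:' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller key, base64>:'
  # a clean teardown prints "disconnected 1 controller(s)"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0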
00:19:24.730 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.730 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.730 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.991 00:19:24.991 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.991 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.991 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.252 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.252 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.252 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:25.252 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.252 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:25.252 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.252 { 00:19:25.252 "cntlid": 73, 00:19:25.252 "qid": 0, 00:19:25.252 "state": "enabled", 00:19:25.252 "thread": "nvmf_tgt_poll_group_000", 00:19:25.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:25.252 "listen_address": { 00:19:25.252 "trtype": "TCP", 00:19:25.252 "adrfam": "IPv4", 00:19:25.252 "traddr": "10.0.0.2", 00:19:25.252 "trsvcid": "4420" 00:19:25.252 }, 00:19:25.253 "peer_address": { 00:19:25.253 "trtype": "TCP", 00:19:25.253 "adrfam": "IPv4", 00:19:25.253 "traddr": "10.0.0.1", 00:19:25.253 "trsvcid": "42254" 00:19:25.253 }, 00:19:25.253 "auth": { 00:19:25.253 "state": "completed", 00:19:25.253 "digest": "sha384", 00:19:25.253 "dhgroup": "ffdhe4096" 00:19:25.253 } 00:19:25.253 } 00:19:25.253 ]' 00:19:25.253 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.253 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:25.253 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.253 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:25.253 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.253 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.253 
09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.253 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.514 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:19:25.514 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:19:26.084 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.084 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:26.084 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:26.084 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.084 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:26.084 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.084 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.084 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.345 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:19:26.345 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.345 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:26.345 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:26.345 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:26.345 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.345 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.345 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:19:26.345 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.345 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:26.345 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.345 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.345 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.606 00:19:26.607 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.607 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.607 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.867 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.867 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.867 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:26.867 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.867 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:26.867 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.867 { 00:19:26.867 "cntlid": 75, 00:19:26.867 "qid": 0, 00:19:26.867 "state": "enabled", 00:19:26.867 "thread": "nvmf_tgt_poll_group_000", 00:19:26.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:26.867 "listen_address": { 00:19:26.867 "trtype": "TCP", 00:19:26.867 "adrfam": "IPv4", 00:19:26.867 "traddr": "10.0.0.2", 00:19:26.867 "trsvcid": "4420" 00:19:26.867 }, 00:19:26.867 "peer_address": { 00:19:26.867 "trtype": "TCP", 00:19:26.867 "adrfam": "IPv4", 00:19:26.867 "traddr": "10.0.0.1", 00:19:26.867 "trsvcid": "42324" 00:19:26.867 }, 00:19:26.867 "auth": { 00:19:26.867 "state": "completed", 00:19:26.867 "digest": "sha384", 00:19:26.867 "dhgroup": "ffdhe4096" 00:19:26.867 } 00:19:26.867 } 00:19:26.867 ]' 00:19:26.867 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.867 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:26.867 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.867 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:19:26.867 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.867 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.867 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.867 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.128 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:19:27.128 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:19:27.806 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.806 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:27.806 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:27.806 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.806 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:27.806 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.806 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:27.806 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:28.067 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:28.067 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.067 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:28.067 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:28.067 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:28.067 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.067 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.067 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:28.067 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.067 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:28.067 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.067 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.067 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.329 00:19:28.329 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.329 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.329 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.329 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.329 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.329 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:28.329 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.329 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:28.329 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.329 { 00:19:28.329 "cntlid": 77, 00:19:28.329 "qid": 0, 00:19:28.329 "state": "enabled", 00:19:28.329 "thread": "nvmf_tgt_poll_group_000", 00:19:28.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:28.329 "listen_address": { 00:19:28.329 "trtype": "TCP", 00:19:28.329 "adrfam": "IPv4", 00:19:28.329 "traddr": "10.0.0.2", 00:19:28.329 "trsvcid": "4420" 00:19:28.329 }, 00:19:28.329 "peer_address": { 00:19:28.329 "trtype": "TCP", 00:19:28.329 "adrfam": "IPv4", 00:19:28.329 "traddr": "10.0.0.1", 00:19:28.329 "trsvcid": "42350" 00:19:28.329 }, 00:19:28.329 "auth": { 00:19:28.329 "state": "completed", 00:19:28.329 "digest": "sha384", 00:19:28.329 "dhgroup": "ffdhe4096" 00:19:28.329 } 00:19:28.329 } 00:19:28.329 ]' 00:19:28.329 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.591 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:28.591 09:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.591 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:28.591 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.591 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.591 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.591 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.852 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:19:28.852 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:19:29.422 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.422 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:29.422 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:29.422 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.422 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:29.422 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.422 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:29.422 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:29.683 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:19:29.683 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.683 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:29.683 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:29.683 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:29.683 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.683 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:29.683 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:29.683 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.683 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:29.683 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:29.683 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.683 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.944 00:19:29.944 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.944 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.944 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.944 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.944 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.944 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:29.944 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.944 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:29.944 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.944 { 00:19:29.944 "cntlid": 79, 00:19:29.944 "qid": 0, 00:19:29.944 "state": "enabled", 00:19:29.944 "thread": "nvmf_tgt_poll_group_000", 00:19:29.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:29.944 "listen_address": { 00:19:29.944 "trtype": "TCP", 00:19:29.944 "adrfam": "IPv4", 00:19:29.944 "traddr": "10.0.0.2", 00:19:29.944 "trsvcid": "4420" 00:19:29.944 }, 00:19:29.944 "peer_address": { 00:19:29.944 "trtype": "TCP", 00:19:29.944 "adrfam": "IPv4", 00:19:29.944 "traddr": "10.0.0.1", 00:19:29.944 "trsvcid": "42364" 00:19:29.944 }, 00:19:29.944 "auth": { 00:19:29.944 "state": "completed", 00:19:29.944 "digest": "sha384", 00:19:29.944 "dhgroup": "ffdhe4096" 00:19:29.944 } 00:19:29.944 } 00:19:29.944 ]' 00:19:29.944 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.205 09:40:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:30.205 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.205 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:30.205 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.205 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.205 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.205 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.466 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:19:30.466 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:19:31.038 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.038 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:31.038 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:31.038 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.038 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:31.038 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.038 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.038 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:31.038 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:31.298 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:19:31.298 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.298 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:31.298 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:31.298 09:40:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:31.298 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.298 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.298 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:31.298 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.298 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:31.298 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.298 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.298 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.560 00:19:31.560 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.560 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.560 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.821 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.821 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.821 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:31.821 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.821 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:31.821 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.821 { 00:19:31.821 "cntlid": 81, 00:19:31.821 "qid": 0, 00:19:31.821 "state": "enabled", 00:19:31.821 "thread": "nvmf_tgt_poll_group_000", 00:19:31.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:31.821 "listen_address": { 00:19:31.821 "trtype": "TCP", 00:19:31.821 "adrfam": "IPv4", 00:19:31.821 "traddr": "10.0.0.2", 00:19:31.821 "trsvcid": "4420" 00:19:31.821 }, 00:19:31.821 "peer_address": { 00:19:31.821 "trtype": "TCP", 00:19:31.821 "adrfam": "IPv4", 00:19:31.821 "traddr": "10.0.0.1", 00:19:31.821 "trsvcid": "42392" 00:19:31.821 }, 00:19:31.821 "auth": { 00:19:31.821 "state": "completed", 00:19:31.821 "digest": 
"sha384", 00:19:31.821 "dhgroup": "ffdhe6144" 00:19:31.821 } 00:19:31.821 } 00:19:31.821 ]' 00:19:31.821 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.821 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.821 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.821 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:31.821 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.821 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.821 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.821 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.083 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:19:32.083 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:19:32.656 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.656 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:32.656 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:32.656 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.656 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:32.656 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.656 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:32.656 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:32.917 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:19:32.917 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.917 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:32.917 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:32.917 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:32.917 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.917 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.917 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:32.917 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.917 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:32.917 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.917 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.917 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.177 00:19:33.177 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.177 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.177 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.439 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.439 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.439 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:33.439 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.439 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:33.439 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.439 { 00:19:33.439 "cntlid": 83, 00:19:33.439 "qid": 0, 00:19:33.439 "state": "enabled", 00:19:33.439 "thread": "nvmf_tgt_poll_group_000", 00:19:33.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:33.439 "listen_address": { 00:19:33.439 "trtype": "TCP", 00:19:33.439 "adrfam": "IPv4", 00:19:33.439 "traddr": "10.0.0.2", 00:19:33.439 
"trsvcid": "4420" 00:19:33.439 }, 00:19:33.439 "peer_address": { 00:19:33.439 "trtype": "TCP", 00:19:33.439 "adrfam": "IPv4", 00:19:33.439 "traddr": "10.0.0.1", 00:19:33.439 "trsvcid": "42416" 00:19:33.439 }, 00:19:33.439 "auth": { 00:19:33.439 "state": "completed", 00:19:33.439 "digest": "sha384", 00:19:33.439 "dhgroup": "ffdhe6144" 00:19:33.439 } 00:19:33.439 } 00:19:33.439 ]' 00:19:33.439 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.439 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.439 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.439 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:33.439 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.699 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.699 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.699 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.699 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:19:33.700 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:19:34.642 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.642 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:34.642 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:34.642 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.642 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:34.642 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.642 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:34.642 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:34.642 
09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:19:34.642 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.642 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:34.642 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:34.642 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:34.642 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.642 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.642 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:34.642 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.642 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:34.642 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.642 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.642 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.902 00:19:34.903 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.903 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.903 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.164 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.164 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.164 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:35.164 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.164 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:35.164 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.164 { 00:19:35.164 "cntlid": 85, 00:19:35.164 "qid": 0, 00:19:35.164 "state": "enabled", 00:19:35.164 "thread": "nvmf_tgt_poll_group_000", 00:19:35.164 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:35.164 "listen_address": { 00:19:35.164 "trtype": "TCP", 00:19:35.164 "adrfam": "IPv4", 00:19:35.164 "traddr": "10.0.0.2", 00:19:35.164 "trsvcid": "4420" 00:19:35.164 }, 00:19:35.164 "peer_address": { 00:19:35.164 "trtype": "TCP", 00:19:35.164 "adrfam": "IPv4", 00:19:35.164 "traddr": "10.0.0.1", 00:19:35.164 "trsvcid": "42432" 00:19:35.164 }, 00:19:35.164 "auth": { 00:19:35.164 "state": "completed", 00:19:35.164 "digest": "sha384", 00:19:35.164 "dhgroup": "ffdhe6144" 00:19:35.164 } 00:19:35.164 } 00:19:35.164 ]' 00:19:35.164 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.164 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:35.164 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.164 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.164 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.426 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.426 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.426 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.426 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:19:35.426 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:36.369 09:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.369 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.631 00:19:36.631 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.631 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.631 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.892 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.892 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.892 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.892 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.892 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.892 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.892 { 00:19:36.892 "cntlid": 87, 
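Note the asymmetry in this iteration: nvmf_subsystem_add_host is issued with --dhchap-key key3 but no --dhchap-ctrlr-key. That falls out of the expansion visible at @68, since ${var:+word} produces word only when var is set and non-empty; key id 3 has no controller key defined, so the array stays empty and this pass exercises unidirectional (host-only) authentication. Schematically (subnqn/hostnqn are placeholders):

    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})   # empty array when ckeys[3] is unset
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"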
00:19:36.892 "qid": 0, 00:19:36.892 "state": "enabled", 00:19:36.892 "thread": "nvmf_tgt_poll_group_000", 00:19:36.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:36.892 "listen_address": { 00:19:36.892 "trtype": "TCP", 00:19:36.892 "adrfam": "IPv4", 00:19:36.892 "traddr": "10.0.0.2", 00:19:36.892 "trsvcid": "4420" 00:19:36.892 }, 00:19:36.892 "peer_address": { 00:19:36.892 "trtype": "TCP", 00:19:36.892 "adrfam": "IPv4", 00:19:36.892 "traddr": "10.0.0.1", 00:19:36.892 "trsvcid": "47236" 00:19:36.892 }, 00:19:36.892 "auth": { 00:19:36.892 "state": "completed", 00:19:36.892 "digest": "sha384", 00:19:36.892 "dhgroup": "ffdhe6144" 00:19:36.892 } 00:19:36.892 } 00:19:36.892 ]' 00:19:36.892 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.892 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.892 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.892 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.892 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.892 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.892 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.892 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.153 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:19:37.153 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:19:37.726 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.726 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:37.726 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:37.726 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.726 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:37.726 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.726 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.726 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:37.726 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:37.988 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:19:37.988 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.988 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:37.988 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:37.988 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:37.988 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.988 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.988 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:37.988 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.988 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:37.988 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.988 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.988 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.559 00:19:38.559 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.559 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.559 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.559 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.559 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.559 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:38.559 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.820 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
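All of the secrets in this trace use the NVMe key-interchange format DHHC-1:xx:<base64>:, where xx records how the raw secret was transformed (00 = unmodified, 01/02/03 = HMAC with SHA-256/-384/-512) and the base64 blob carries the key material plus a CRC-32. Recent nvme-cli releases can generate such keys; the following is illustrative only, and the exact flag spellings should be checked against `nvme gen-dhchap-key --help` on the installed version:

    # Illustrative: mint a 32-byte key, SHA-256-transformed, for this host NQN
    nvme gen-dhchap-key --hmac=1 --key-length=32 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
    # prints a key of the form DHHC-1:01:<base64 key material + CRC>: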
-- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:38.820 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.820 { 00:19:38.820 "cntlid": 89, 00:19:38.820 "qid": 0, 00:19:38.820 "state": "enabled", 00:19:38.820 "thread": "nvmf_tgt_poll_group_000", 00:19:38.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:38.820 "listen_address": { 00:19:38.820 "trtype": "TCP", 00:19:38.820 "adrfam": "IPv4", 00:19:38.820 "traddr": "10.0.0.2", 00:19:38.820 "trsvcid": "4420" 00:19:38.820 }, 00:19:38.820 "peer_address": { 00:19:38.820 "trtype": "TCP", 00:19:38.820 "adrfam": "IPv4", 00:19:38.820 "traddr": "10.0.0.1", 00:19:38.820 "trsvcid": "47264" 00:19:38.820 }, 00:19:38.820 "auth": { 00:19:38.820 "state": "completed", 00:19:38.820 "digest": "sha384", 00:19:38.820 "dhgroup": "ffdhe8192" 00:19:38.820 } 00:19:38.820 } 00:19:38.820 ]' 00:19:38.820 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.820 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.820 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.820 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:38.820 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.820 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.820 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.820 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.081 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:19:39.081 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:19:39.652 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.652 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:39.652 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:39.652 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.652 09:40:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:39.652 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.652 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:39.652 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:39.913 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:19:39.913 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.913 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:39.913 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:39.913 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:39.913 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.913 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.913 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:39.913 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.913 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:39.913 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.913 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.913 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.485 00:19:40.485 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.485 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.485 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.485 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.485 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:40.485 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:40.485 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.485 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:40.485 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.485 { 00:19:40.485 "cntlid": 91, 00:19:40.485 "qid": 0, 00:19:40.485 "state": "enabled", 00:19:40.485 "thread": "nvmf_tgt_poll_group_000", 00:19:40.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:40.485 "listen_address": { 00:19:40.485 "trtype": "TCP", 00:19:40.485 "adrfam": "IPv4", 00:19:40.485 "traddr": "10.0.0.2", 00:19:40.485 "trsvcid": "4420" 00:19:40.485 }, 00:19:40.485 "peer_address": { 00:19:40.485 "trtype": "TCP", 00:19:40.485 "adrfam": "IPv4", 00:19:40.485 "traddr": "10.0.0.1", 00:19:40.485 "trsvcid": "47288" 00:19:40.485 }, 00:19:40.485 "auth": { 00:19:40.485 "state": "completed", 00:19:40.485 "digest": "sha384", 00:19:40.485 "dhgroup": "ffdhe8192" 00:19:40.485 } 00:19:40.485 } 00:19:40.485 ]' 00:19:40.485 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.485 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.485 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.747 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:40.747 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.747 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.747 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.747 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.747 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:19:40.747 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:41.687 09:40:41 
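The ffdhe* names being swept are the finite-field Diffie-Hellman groups standardized in RFC 7919, named by modulus size in bits, so ffdhe8192 performs modular exponentiation over an 8192-bit prime and is the most expensive exchange in the matrix. An illustrative list (only null, ffdhe4096, ffdhe6144 and ffdhe8192 actually appear in this excerpt; the full array in auth.sh is not shown):

    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)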
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.687 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.257 00:19:42.257 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.257 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.257 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.257 09:40:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.257 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.257 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:42.257 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.518 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:42.518 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.518 { 00:19:42.518 "cntlid": 93, 00:19:42.518 "qid": 0, 00:19:42.518 "state": "enabled", 00:19:42.518 "thread": "nvmf_tgt_poll_group_000", 00:19:42.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:42.518 "listen_address": { 00:19:42.518 "trtype": "TCP", 00:19:42.518 "adrfam": "IPv4", 00:19:42.518 "traddr": "10.0.0.2", 00:19:42.518 "trsvcid": "4420" 00:19:42.518 }, 00:19:42.518 "peer_address": { 00:19:42.518 "trtype": "TCP", 00:19:42.518 "adrfam": "IPv4", 00:19:42.518 "traddr": "10.0.0.1", 00:19:42.518 "trsvcid": "47304" 00:19:42.518 }, 00:19:42.518 "auth": { 00:19:42.518 "state": "completed", 00:19:42.518 "digest": "sha384", 00:19:42.518 "dhgroup": "ffdhe8192" 00:19:42.518 } 00:19:42.518 } 00:19:42.518 ]' 00:19:42.518 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.518 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.518 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.518 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:42.518 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.518 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.518 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.518 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.778 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:19:42.778 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:19:43.347 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.347 09:40:42 
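Each iteration ends with the same sequence so no state leaks into the next combination: the verified SPDK-side controller is detached, the same key is then exercised once more through the Linux kernel host, and finally the host entry is revoked on the target. In the order the frame tags appear (subnqn/hostnqn are placeholders):

    hostrpc bdev_nvme_detach_controller nvme0                          # @78: drop the SPDK host's controller
    nvme_connect --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"   # @80: same key via the kernel host
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0                      # @82
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"            # @83: revoke before the next pass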
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:43.347 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:43.347 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.347 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:43.347 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.347 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:43.347 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:43.608 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:19:43.608 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.608 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:43.608 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:43.608 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:43.608 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.608 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:43.608 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:43.608 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.608 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:43.608 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:43.608 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:43.608 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:44.181 00:19:44.181 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.181 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.181 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.181 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.181 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.181 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:44.181 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.181 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:44.181 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.181 { 00:19:44.181 "cntlid": 95, 00:19:44.181 "qid": 0, 00:19:44.181 "state": "enabled", 00:19:44.181 "thread": "nvmf_tgt_poll_group_000", 00:19:44.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:44.181 "listen_address": { 00:19:44.181 "trtype": "TCP", 00:19:44.181 "adrfam": "IPv4", 00:19:44.181 "traddr": "10.0.0.2", 00:19:44.181 "trsvcid": "4420" 00:19:44.181 }, 00:19:44.181 "peer_address": { 00:19:44.181 "trtype": "TCP", 00:19:44.181 "adrfam": "IPv4", 00:19:44.181 "traddr": "10.0.0.1", 00:19:44.181 "trsvcid": "47328" 00:19:44.181 }, 00:19:44.181 "auth": { 00:19:44.181 "state": "completed", 00:19:44.181 "digest": "sha384", 00:19:44.181 "dhgroup": "ffdhe8192" 00:19:44.181 } 00:19:44.181 } 00:19:44.181 ]' 00:19:44.181 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.181 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.181 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.441 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:44.441 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.441 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.441 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.441 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.441 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:19:44.442 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.383 09:40:44 
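Worth noting: every key is validated through two independent DH-HMAC-CHAP implementations, so a green iteration means SPDK's userspace initiator and the Linux kernel NVMe/TCP host both negotiated successfully against the SPDK target. Schematically (arguments elided):

    hostrpc bdev_nvme_attach_controller ... --dhchap-key key3   # SPDK userspace host
    nvme connect ... --dhchap-secret "$key3"                    # Linux kernel host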
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.383 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.384 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.645 00:19:45.645 
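With the ffdhe8192 sweep finished, the outer digest loop (@118) advances to sha512 and the dhgroup list restarts at null. A null DH group means the exchange runs without any Diffie-Hellman augmentation at all: the response is an HMAC (here SHA-512) over the controller's challenge under the pre-shared DHHC-1 key, so it is the cheapest mode but offers no forward secrecy. The host-side restriction is the same one-liner shape as before:

    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null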
09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.645 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.645 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.906 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.906 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.906 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:45.906 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.906 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:45.906 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.906 { 00:19:45.906 "cntlid": 97, 00:19:45.906 "qid": 0, 00:19:45.906 "state": "enabled", 00:19:45.906 "thread": "nvmf_tgt_poll_group_000", 00:19:45.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:45.906 "listen_address": { 00:19:45.906 "trtype": "TCP", 00:19:45.906 "adrfam": "IPv4", 00:19:45.906 "traddr": "10.0.0.2", 00:19:45.906 "trsvcid": "4420" 00:19:45.906 }, 00:19:45.906 "peer_address": { 00:19:45.906 "trtype": "TCP", 00:19:45.906 "adrfam": "IPv4", 00:19:45.906 "traddr": "10.0.0.1", 00:19:45.906 "trsvcid": "46388" 00:19:45.906 }, 00:19:45.906 "auth": { 00:19:45.906 "state": "completed", 00:19:45.906 "digest": "sha512", 00:19:45.906 "dhgroup": "null" 00:19:45.906 } 00:19:45.906 } 00:19:45.906 ]' 00:19:45.906 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.906 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.906 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.906 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:45.906 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.906 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.906 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.906 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.168 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:19:46.168 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:19:46.738 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.738 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:46.738 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:46.738 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.738 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:46.738 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.738 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:46.738 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:46.998 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:19:46.998 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.998 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:46.998 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:46.998 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:46.998 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.998 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.998 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:46.998 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.998 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:46.998 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.998 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.998 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.258 00:19:47.258 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.258 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.258 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.518 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.518 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.518 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:47.518 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.518 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:47.518 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.518 { 00:19:47.518 "cntlid": 99, 00:19:47.518 "qid": 0, 00:19:47.518 "state": "enabled", 00:19:47.518 "thread": "nvmf_tgt_poll_group_000", 00:19:47.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:47.518 "listen_address": { 00:19:47.518 "trtype": "TCP", 00:19:47.518 "adrfam": "IPv4", 00:19:47.518 "traddr": "10.0.0.2", 00:19:47.518 "trsvcid": "4420" 00:19:47.518 }, 00:19:47.518 "peer_address": { 00:19:47.518 "trtype": "TCP", 00:19:47.518 "adrfam": "IPv4", 00:19:47.518 "traddr": "10.0.0.1", 00:19:47.518 "trsvcid": "46422" 00:19:47.518 }, 00:19:47.518 "auth": { 00:19:47.518 "state": "completed", 00:19:47.518 "digest": "sha512", 00:19:47.518 "dhgroup": "null" 00:19:47.518 } 00:19:47.518 } 00:19:47.518 ]' 00:19:47.518 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.518 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.518 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.518 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:47.518 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.518 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.518 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.518 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.778 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:19:47.778 09:40:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:19:48.346 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.346 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:48.346 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:48.346 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.346 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:48.346 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.347 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:48.347 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:48.606 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:19:48.607 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.607 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:48.607 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:48.607 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:48.607 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.607 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.607 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:48.607 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.607 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:48.607 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.607 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
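[Editor's note] For readers following the xtrace above: the `for` markers at target/auth.sh@118-121 show the test walking every (digest, dhgroup, keyid) combination and re-running one connect/verify/disconnect cycle per combination. A minimal sketch of that loop, reconstructed from the trace (the `hostrpc` helper, NQNs, socket path, and auth.sh line references are all taken from the log; the array contents reflect only the slice exercised in this excerpt):

```bash
#!/usr/bin/env bash
# Sketch of the loop driving the trace above, per the auth.sh@118-121 markers.
# Assumptions: an SPDK target on 10.0.0.2:4420, a host app with its RPC socket
# at /var/tmp/host.sock, and DH-HMAC-CHAP keys key0..key3 (controller keys
# ckey0..ckey2; key3 is host-only) already registered with both keyrings.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # auth.sh@31

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb

digests=(sha512)                        # only sha512 appears in this excerpt
dhgroups=(null ffdhe2048 ffdhe3072)
ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]=)

for digest in "${digests[@]}"; do       # auth.sh@118
  for dhgroup in "${dhgroups[@]}"; do   # auth.sh@119
    for keyid in 0 1 2 3; do            # auth.sh@120
      # Pin the host to a single digest/dhgroup so the values negotiated
      # on the qpair are exactly the ones under test (auth.sh@121).
      hostrpc bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # Target side: authorize the host NQN with this key pair (auth.sh@70).
      "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" \
        ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

      # Host side: attach a controller over the authenticated qpair, then
      # detach and clean up (auth.sh@71, @78, @83). The real test also
      # re-runs the handshake with nvme-cli in between (auth.sh@80-82).
      hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" \
        ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      hostrpc bdev_nvme_detach_controller nvme0

      "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    done
  done
done
```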
00:19:48.607 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.866 00:19:48.866 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.866 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.866 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.126 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.126 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.126 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:49.126 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.126 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:49.126 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.126 { 00:19:49.126 "cntlid": 101, 00:19:49.126 "qid": 0, 00:19:49.126 "state": "enabled", 00:19:49.126 "thread": "nvmf_tgt_poll_group_000", 00:19:49.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:49.126 "listen_address": { 00:19:49.126 "trtype": "TCP", 00:19:49.126 "adrfam": "IPv4", 00:19:49.126 "traddr": "10.0.0.2", 00:19:49.126 "trsvcid": "4420" 00:19:49.126 }, 00:19:49.126 "peer_address": { 00:19:49.126 "trtype": "TCP", 00:19:49.126 "adrfam": "IPv4", 00:19:49.126 "traddr": "10.0.0.1", 00:19:49.126 "trsvcid": "46448" 00:19:49.126 }, 00:19:49.126 "auth": { 00:19:49.126 "state": "completed", 00:19:49.126 "digest": "sha512", 00:19:49.126 "dhgroup": "null" 00:19:49.126 } 00:19:49.126 } 00:19:49.126 ]' 00:19:49.126 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.126 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.126 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.126 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:49.126 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.126 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.126 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.126 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.387 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:19:49.387 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:19:49.958 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.958 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:49.958 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:49.958 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.958 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:49.958 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.958 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:49.958 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:50.219 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:19:50.219 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.219 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:50.219 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:50.219 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:50.219 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.219 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:50.219 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:50.219 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.219 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:50.219 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:50.219 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.219 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.480 00:19:50.480 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.480 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.480 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.740 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.740 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.740 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:50.740 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.740 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:50.740 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.740 { 00:19:50.740 "cntlid": 103, 00:19:50.740 "qid": 0, 00:19:50.740 "state": "enabled", 00:19:50.740 "thread": "nvmf_tgt_poll_group_000", 00:19:50.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:50.740 "listen_address": { 00:19:50.740 "trtype": "TCP", 00:19:50.741 "adrfam": "IPv4", 00:19:50.741 "traddr": "10.0.0.2", 00:19:50.741 "trsvcid": "4420" 00:19:50.741 }, 00:19:50.741 "peer_address": { 00:19:50.741 "trtype": "TCP", 00:19:50.741 "adrfam": "IPv4", 00:19:50.741 "traddr": "10.0.0.1", 00:19:50.741 "trsvcid": "46468" 00:19:50.741 }, 00:19:50.741 "auth": { 00:19:50.741 "state": "completed", 00:19:50.741 "digest": "sha512", 00:19:50.741 "dhgroup": "null" 00:19:50.741 } 00:19:50.741 } 00:19:50.741 ]' 00:19:50.741 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.741 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.741 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.741 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:50.741 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.741 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.741 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.741 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.001 09:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:19:51.001 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:19:51.573 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.573 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:51.573 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:51.573 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.573 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:51.573 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.573 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.573 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:51.573 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:51.834 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:19:51.834 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.834 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:51.834 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:51.834 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:51.834 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.834 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.834 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:51.834 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.835 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:51.835 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
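[Editor's note] One detail worth pulling out of the keyid-3 cycle that just finished: the trace omits `--dhchap-ctrlr-key` on both `nvmf_subsystem_add_host` and `bdev_nvme_attach_controller`, and the matching `nvme connect` passes only `--dhchap-secret` with no `--dhchap-ctrl-secret`. That is the bash `${var:+word}` expansion at target/auth.sh@68 at work: `ckeys[3]` is empty, so the flag is never emitted and the key3 handshake is unidirectional (the host proves itself to the controller, but does not challenge the controller back). A small standalone demonstration of that expansion:

```bash
# ${ckeys[$keyid]:+...} expands to nothing when the entry is empty/unset,
# so no controller-key flag is produced for key3 (bidirectional auth off).
ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]=)

for keyid in 2 3; do
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})   # auth.sh@68
  echo "key$keyid extra args: ${ckey[*]:-<none>}"
done
# -> key2 extra args: --dhchap-ctrlr-key ckey2
# -> key3 extra args: <none>
```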
00:19:51.835 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.835 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.095 00:19:52.095 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.095 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.095 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.095 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.095 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.356 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:52.356 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.356 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:52.356 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.356 { 00:19:52.356 "cntlid": 105, 00:19:52.356 "qid": 0, 00:19:52.356 "state": "enabled", 00:19:52.356 "thread": "nvmf_tgt_poll_group_000", 00:19:52.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:52.356 "listen_address": { 00:19:52.356 "trtype": "TCP", 00:19:52.356 "adrfam": "IPv4", 00:19:52.356 "traddr": "10.0.0.2", 00:19:52.356 "trsvcid": "4420" 00:19:52.356 }, 00:19:52.356 "peer_address": { 00:19:52.356 "trtype": "TCP", 00:19:52.356 "adrfam": "IPv4", 00:19:52.356 "traddr": "10.0.0.1", 00:19:52.356 "trsvcid": "46510" 00:19:52.356 }, 00:19:52.356 "auth": { 00:19:52.356 "state": "completed", 00:19:52.356 "digest": "sha512", 00:19:52.356 "dhgroup": "ffdhe2048" 00:19:52.356 } 00:19:52.356 } 00:19:52.356 ]' 00:19:52.356 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.356 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.356 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.356 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:52.356 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.356 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.356 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.356 09:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.617 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:19:52.617 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:19:53.187 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.187 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:53.188 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:53.188 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.188 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:53.188 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.188 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:53.188 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:53.447 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:19:53.447 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.447 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:53.447 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:53.447 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:53.447 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.447 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.447 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:53.447 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:53.447 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:53.447 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.447 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.447 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.707 00:19:53.707 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.707 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.707 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.967 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.967 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.967 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:53.967 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.967 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:53.967 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.967 { 00:19:53.967 "cntlid": 107, 00:19:53.967 "qid": 0, 00:19:53.967 "state": "enabled", 00:19:53.967 "thread": "nvmf_tgt_poll_group_000", 00:19:53.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:53.967 "listen_address": { 00:19:53.967 "trtype": "TCP", 00:19:53.967 "adrfam": "IPv4", 00:19:53.967 "traddr": "10.0.0.2", 00:19:53.967 "trsvcid": "4420" 00:19:53.967 }, 00:19:53.967 "peer_address": { 00:19:53.967 "trtype": "TCP", 00:19:53.967 "adrfam": "IPv4", 00:19:53.967 "traddr": "10.0.0.1", 00:19:53.967 "trsvcid": "46532" 00:19:53.967 }, 00:19:53.967 "auth": { 00:19:53.967 "state": "completed", 00:19:53.967 "digest": "sha512", 00:19:53.967 "dhgroup": "ffdhe2048" 00:19:53.967 } 00:19:53.967 } 00:19:53.967 ]' 00:19:53.967 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.967 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:53.967 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.967 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:53.967 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:19:53.967 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.967 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.967 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.227 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:19:54.227 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:19:54.795 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.795 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:54.795 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:54.795 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.795 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:54.795 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.795 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:54.795 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:55.055 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:19:55.055 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.055 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:55.055 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:55.055 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:55.055 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.055 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 
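[Editor's note] The JSON arrays interleaved in the trace are the payload those assertions run against: after each attach, target/auth.sh@73-77 confirms the controller actually came up and that the qpair negotiated exactly the digest, DH group, and completion state under test. A condensed form of the checks, reusing `$rpc`, `$subnqn`, and `hostrpc` from the sketch earlier:

```bash
# auth.sh@73: the host must report exactly one controller, named nvme0.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# auth.sh@74-77: the target-side qpair must carry the negotiated auth fields;
# $digest and $dhgroup come from the surrounding loop.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
```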
00:19:55.055 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:55.055 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.055 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:55.055 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.055 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.055 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.316 00:19:55.316 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.316 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.316 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.576 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.576 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.576 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:55.576 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.576 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:55.576 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.576 { 00:19:55.576 "cntlid": 109, 00:19:55.576 "qid": 0, 00:19:55.576 "state": "enabled", 00:19:55.576 "thread": "nvmf_tgt_poll_group_000", 00:19:55.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:55.576 "listen_address": { 00:19:55.576 "trtype": "TCP", 00:19:55.576 "adrfam": "IPv4", 00:19:55.576 "traddr": "10.0.0.2", 00:19:55.576 "trsvcid": "4420" 00:19:55.576 }, 00:19:55.576 "peer_address": { 00:19:55.576 "trtype": "TCP", 00:19:55.576 "adrfam": "IPv4", 00:19:55.576 "traddr": "10.0.0.1", 00:19:55.576 "trsvcid": "34284" 00:19:55.576 }, 00:19:55.576 "auth": { 00:19:55.576 "state": "completed", 00:19:55.576 "digest": "sha512", 00:19:55.576 "dhgroup": "ffdhe2048" 00:19:55.576 } 00:19:55.576 } 00:19:55.576 ]' 00:19:55.576 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.576 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:55.576 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.576 09:40:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:55.576 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.576 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.576 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.576 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.836 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:19:55.836 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:19:56.408 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.408 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:56.408 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:56.408 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.408 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:56.408 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.408 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:56.408 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:56.668 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:56.668 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.668 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:56.668 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:56.668 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:56.668 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.668 09:40:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:56.668 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:56.668 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.668 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:56.668 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:56.668 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.668 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.928 00:19:56.928 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.928 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.928 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.188 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.188 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.188 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:57.188 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.188 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:57.188 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.188 { 00:19:57.188 "cntlid": 111, 00:19:57.188 "qid": 0, 00:19:57.188 "state": "enabled", 00:19:57.188 "thread": "nvmf_tgt_poll_group_000", 00:19:57.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:57.188 "listen_address": { 00:19:57.188 "trtype": "TCP", 00:19:57.188 "adrfam": "IPv4", 00:19:57.188 "traddr": "10.0.0.2", 00:19:57.188 "trsvcid": "4420" 00:19:57.188 }, 00:19:57.188 "peer_address": { 00:19:57.188 "trtype": "TCP", 00:19:57.188 "adrfam": "IPv4", 00:19:57.188 "traddr": "10.0.0.1", 00:19:57.188 "trsvcid": "34306" 00:19:57.188 }, 00:19:57.188 "auth": { 00:19:57.188 "state": "completed", 00:19:57.188 "digest": "sha512", 00:19:57.188 "dhgroup": "ffdhe2048" 00:19:57.188 } 00:19:57.188 } 00:19:57.188 ]' 00:19:57.188 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.188 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:57.188 
09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.188 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:57.188 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.188 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.188 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.188 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.448 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:19:57.448 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:19:58.018 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.018 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:58.018 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:58.018 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.018 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:58.018 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.018 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.018 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:58.018 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:58.277 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:58.277 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.277 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:58.277 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:58.277 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:58.277 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.277 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.277 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:58.277 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.277 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:58.277 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.277 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.278 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.537 00:19:58.537 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.537 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.537 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.798 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.798 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.798 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:58.798 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.798 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:58.798 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.798 { 00:19:58.798 "cntlid": 113, 00:19:58.798 "qid": 0, 00:19:58.798 "state": "enabled", 00:19:58.798 "thread": "nvmf_tgt_poll_group_000", 00:19:58.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:58.798 "listen_address": { 00:19:58.798 "trtype": "TCP", 00:19:58.798 "adrfam": "IPv4", 00:19:58.798 "traddr": "10.0.0.2", 00:19:58.798 "trsvcid": "4420" 00:19:58.798 }, 00:19:58.798 "peer_address": { 00:19:58.798 "trtype": "TCP", 00:19:58.798 "adrfam": "IPv4", 00:19:58.798 "traddr": "10.0.0.1", 00:19:58.798 "trsvcid": "34326" 00:19:58.798 }, 00:19:58.798 "auth": { 00:19:58.798 "state": "completed", 00:19:58.798 "digest": "sha512", 00:19:58.798 "dhgroup": "ffdhe3072" 00:19:58.798 } 00:19:58.798 } 00:19:58.798 ]' 00:19:58.798 09:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.798 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.798 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.798 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:58.798 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.798 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.798 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.798 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.058 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:19:59.058 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:19:59.627 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.627 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:59.627 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:59.627 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.627 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:59.627 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.627 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:59.627 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:59.887 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:59.887 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.887 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:19:59.887 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:59.887 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:59.887 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.887 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.887 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:59.887 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.887 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:59.887 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.887 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.887 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.147 00:20:00.147 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.147 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.147 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.406 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.406 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.406 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:00.406 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.406 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:00.406 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.406 { 00:20:00.406 "cntlid": 115, 00:20:00.406 "qid": 0, 00:20:00.406 "state": "enabled", 00:20:00.406 "thread": "nvmf_tgt_poll_group_000", 00:20:00.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:00.406 "listen_address": { 00:20:00.406 "trtype": "TCP", 00:20:00.406 "adrfam": "IPv4", 00:20:00.406 "traddr": "10.0.0.2", 00:20:00.406 "trsvcid": "4420" 00:20:00.406 }, 00:20:00.406 "peer_address": { 00:20:00.406 "trtype": "TCP", 00:20:00.406 "adrfam": "IPv4", 
00:20:00.406 "traddr": "10.0.0.1", 00:20:00.406 "trsvcid": "34352" 00:20:00.406 }, 00:20:00.406 "auth": { 00:20:00.406 "state": "completed", 00:20:00.406 "digest": "sha512", 00:20:00.406 "dhgroup": "ffdhe3072" 00:20:00.406 } 00:20:00.406 } 00:20:00.406 ]' 00:20:00.406 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.406 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.406 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.407 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:00.407 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.407 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.407 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.407 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.667 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:20:00.667 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:20:01.238 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.238 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:01.238 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:01.238 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.238 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:01.238 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.238 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:01.238 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:01.499 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:20:01.499 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.499 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:01.499 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:01.499 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:01.499 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.499 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.499 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:01.499 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.499 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:01.499 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.499 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.499 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.759 00:20:01.759 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.759 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.759 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.019 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.019 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.019 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:02.019 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.019 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:02.019 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.019 { 00:20:02.019 "cntlid": 117, 00:20:02.019 "qid": 0, 00:20:02.019 "state": "enabled", 00:20:02.019 "thread": "nvmf_tgt_poll_group_000", 00:20:02.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:02.019 "listen_address": { 00:20:02.019 "trtype": "TCP", 
00:20:02.019 "adrfam": "IPv4", 00:20:02.019 "traddr": "10.0.0.2", 00:20:02.019 "trsvcid": "4420" 00:20:02.019 }, 00:20:02.019 "peer_address": { 00:20:02.019 "trtype": "TCP", 00:20:02.019 "adrfam": "IPv4", 00:20:02.019 "traddr": "10.0.0.1", 00:20:02.019 "trsvcid": "34382" 00:20:02.019 }, 00:20:02.019 "auth": { 00:20:02.019 "state": "completed", 00:20:02.019 "digest": "sha512", 00:20:02.019 "dhgroup": "ffdhe3072" 00:20:02.019 } 00:20:02.019 } 00:20:02.019 ]' 00:20:02.019 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.019 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:02.019 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.019 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:02.020 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.020 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.020 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.020 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.279 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:20:02.279 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:20:02.849 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.849 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:02.849 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:02.849 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.849 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:02.850 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.850 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:02.850 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:03.110 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:03.110 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.110 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:03.110 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:03.110 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:03.110 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.110 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:03.110 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:03.110 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.110 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:03.110 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:03.110 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.110 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.371 00:20:03.371 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.371 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.371 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.631 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.631 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.631 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:03.631 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.631 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:03.631 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.631 { 00:20:03.631 "cntlid": 119, 00:20:03.631 "qid": 0, 00:20:03.631 "state": "enabled", 00:20:03.631 "thread": "nvmf_tgt_poll_group_000", 00:20:03.631 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:03.631 "listen_address": { 00:20:03.631 "trtype": "TCP", 00:20:03.631 "adrfam": "IPv4", 00:20:03.631 "traddr": "10.0.0.2", 00:20:03.631 "trsvcid": "4420" 00:20:03.631 }, 00:20:03.631 "peer_address": { 00:20:03.631 "trtype": "TCP", 00:20:03.631 "adrfam": "IPv4", 00:20:03.631 "traddr": "10.0.0.1", 00:20:03.631 "trsvcid": "34404" 00:20:03.631 }, 00:20:03.631 "auth": { 00:20:03.631 "state": "completed", 00:20:03.631 "digest": "sha512", 00:20:03.631 "dhgroup": "ffdhe3072" 00:20:03.631 } 00:20:03.631 } 00:20:03.631 ]' 00:20:03.631 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.631 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.631 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.631 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:03.631 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.631 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.631 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.631 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.892 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:20:03.892 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:20:04.463 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.463 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:04.463 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:04.463 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.463 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:04.463 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.463 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.463 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:04.463 09:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:04.724 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:04.724 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.724 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:04.724 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:04.724 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:04.724 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.724 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.724 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:04.724 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.724 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:04.724 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.724 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.724 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.984 00:20:04.984 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.984 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.984 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.245 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.245 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.245 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:05.245 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.245 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:05.245 09:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.245 { 00:20:05.245 "cntlid": 121, 00:20:05.245 "qid": 0, 00:20:05.245 "state": "enabled", 00:20:05.245 "thread": "nvmf_tgt_poll_group_000", 00:20:05.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:05.245 "listen_address": { 00:20:05.245 "trtype": "TCP", 00:20:05.245 "adrfam": "IPv4", 00:20:05.245 "traddr": "10.0.0.2", 00:20:05.245 "trsvcid": "4420" 00:20:05.245 }, 00:20:05.245 "peer_address": { 00:20:05.245 "trtype": "TCP", 00:20:05.245 "adrfam": "IPv4", 00:20:05.245 "traddr": "10.0.0.1", 00:20:05.245 "trsvcid": "34426" 00:20:05.245 }, 00:20:05.245 "auth": { 00:20:05.245 "state": "completed", 00:20:05.245 "digest": "sha512", 00:20:05.245 "dhgroup": "ffdhe4096" 00:20:05.245 } 00:20:05.245 } 00:20:05.245 ]' 00:20:05.245 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.245 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:05.245 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.245 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:05.245 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.245 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.245 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.245 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.505 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:20:05.505 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:20:06.075 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.075 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:06.075 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:06.075 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.335 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
00:20:06.335 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.335 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:06.335 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:06.335 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:06.335 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.335 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:06.335 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:06.335 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:06.335 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.335 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.335 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:06.335 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.335 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:06.335 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.335 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.335 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.595 00:20:06.595 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.595 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.595 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.855 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.855 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.855 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:20:06.855 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.855 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:06.855 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.855 { 00:20:06.855 "cntlid": 123, 00:20:06.855 "qid": 0, 00:20:06.855 "state": "enabled", 00:20:06.855 "thread": "nvmf_tgt_poll_group_000", 00:20:06.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:06.855 "listen_address": { 00:20:06.855 "trtype": "TCP", 00:20:06.855 "adrfam": "IPv4", 00:20:06.855 "traddr": "10.0.0.2", 00:20:06.855 "trsvcid": "4420" 00:20:06.855 }, 00:20:06.855 "peer_address": { 00:20:06.855 "trtype": "TCP", 00:20:06.855 "adrfam": "IPv4", 00:20:06.855 "traddr": "10.0.0.1", 00:20:06.855 "trsvcid": "40434" 00:20:06.855 }, 00:20:06.855 "auth": { 00:20:06.855 "state": "completed", 00:20:06.855 "digest": "sha512", 00:20:06.855 "dhgroup": "ffdhe4096" 00:20:06.855 } 00:20:06.855 } 00:20:06.855 ]' 00:20:06.855 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.855 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:06.855 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.855 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:06.855 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.855 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.855 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.855 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.116 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:20:07.116 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:20:07.685 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.685 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:07.685 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:07.685 09:41:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.685 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:07.685 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.685 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:07.685 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:07.944 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:20:07.944 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.944 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:07.944 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:07.944 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:07.944 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.944 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.944 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:07.944 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.944 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:07.944 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.944 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.944 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.204 00:20:08.204 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.204 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.204 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.463 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.463 09:41:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.463 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:08.463 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.463 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:08.463 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.463 { 00:20:08.463 "cntlid": 125, 00:20:08.463 "qid": 0, 00:20:08.463 "state": "enabled", 00:20:08.463 "thread": "nvmf_tgt_poll_group_000", 00:20:08.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:08.463 "listen_address": { 00:20:08.463 "trtype": "TCP", 00:20:08.463 "adrfam": "IPv4", 00:20:08.463 "traddr": "10.0.0.2", 00:20:08.463 "trsvcid": "4420" 00:20:08.463 }, 00:20:08.463 "peer_address": { 00:20:08.463 "trtype": "TCP", 00:20:08.463 "adrfam": "IPv4", 00:20:08.463 "traddr": "10.0.0.1", 00:20:08.463 "trsvcid": "40470" 00:20:08.463 }, 00:20:08.463 "auth": { 00:20:08.463 "state": "completed", 00:20:08.463 "digest": "sha512", 00:20:08.463 "dhgroup": "ffdhe4096" 00:20:08.463 } 00:20:08.463 } 00:20:08.463 ]' 00:20:08.463 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.463 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:08.463 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.463 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:08.463 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.724 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.724 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.724 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.724 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:20:08.724 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:20:09.664 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.664 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:09.664 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:09.664 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.664 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:09.664 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.664 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:09.664 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:09.664 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:20:09.664 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.664 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:09.664 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:09.664 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:09.664 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.664 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:09.664 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:09.664 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.664 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:09.664 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:09.664 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.664 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.924 00:20:09.924 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.924 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.924 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.184 09:41:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.184 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.184 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:10.184 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.184 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:10.184 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.184 { 00:20:10.184 "cntlid": 127, 00:20:10.184 "qid": 0, 00:20:10.184 "state": "enabled", 00:20:10.184 "thread": "nvmf_tgt_poll_group_000", 00:20:10.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:10.184 "listen_address": { 00:20:10.184 "trtype": "TCP", 00:20:10.184 "adrfam": "IPv4", 00:20:10.184 "traddr": "10.0.0.2", 00:20:10.184 "trsvcid": "4420" 00:20:10.184 }, 00:20:10.184 "peer_address": { 00:20:10.184 "trtype": "TCP", 00:20:10.184 "adrfam": "IPv4", 00:20:10.184 "traddr": "10.0.0.1", 00:20:10.184 "trsvcid": "40482" 00:20:10.184 }, 00:20:10.184 "auth": { 00:20:10.184 "state": "completed", 00:20:10.184 "digest": "sha512", 00:20:10.184 "dhgroup": "ffdhe4096" 00:20:10.184 } 00:20:10.184 } 00:20:10.184 ]' 00:20:10.184 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.184 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:10.184 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.184 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:10.184 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.184 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.184 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.184 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.445 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:20:10.445 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:20:11.078 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.078 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:11.078 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:11.078 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.078 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:11.078 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.078 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.078 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:11.078 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:11.395 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:20:11.395 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.395 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:11.395 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:11.395 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:11.395 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.395 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.395 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:11.395 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.395 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:11.395 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.395 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.395 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.655 00:20:11.655 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.655 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.655 
09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.915 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.915 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.915 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:11.915 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.915 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:11.915 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.915 { 00:20:11.915 "cntlid": 129, 00:20:11.915 "qid": 0, 00:20:11.915 "state": "enabled", 00:20:11.915 "thread": "nvmf_tgt_poll_group_000", 00:20:11.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:11.915 "listen_address": { 00:20:11.915 "trtype": "TCP", 00:20:11.915 "adrfam": "IPv4", 00:20:11.915 "traddr": "10.0.0.2", 00:20:11.915 "trsvcid": "4420" 00:20:11.915 }, 00:20:11.915 "peer_address": { 00:20:11.915 "trtype": "TCP", 00:20:11.915 "adrfam": "IPv4", 00:20:11.915 "traddr": "10.0.0.1", 00:20:11.915 "trsvcid": "40492" 00:20:11.915 }, 00:20:11.915 "auth": { 00:20:11.915 "state": "completed", 00:20:11.915 "digest": "sha512", 00:20:11.915 "dhgroup": "ffdhe6144" 00:20:11.915 } 00:20:11.915 } 00:20:11.915 ]' 00:20:11.915 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.915 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:11.915 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.915 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:11.915 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.915 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.915 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.915 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.175 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:20:12.175 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret 
DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:20:12.744 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.005 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.266 00:20:13.526 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.526 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.526 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.526 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.526 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.526 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:13.526 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.526 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:13.526 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.526 { 00:20:13.526 "cntlid": 131, 00:20:13.526 "qid": 0, 00:20:13.526 "state": "enabled", 00:20:13.526 "thread": "nvmf_tgt_poll_group_000", 00:20:13.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:13.526 "listen_address": { 00:20:13.526 "trtype": "TCP", 00:20:13.526 "adrfam": "IPv4", 00:20:13.526 "traddr": "10.0.0.2", 00:20:13.526 "trsvcid": "4420" 00:20:13.526 }, 00:20:13.526 "peer_address": { 00:20:13.526 "trtype": "TCP", 00:20:13.526 "adrfam": "IPv4", 00:20:13.526 "traddr": "10.0.0.1", 00:20:13.527 "trsvcid": "40514" 00:20:13.527 }, 00:20:13.527 "auth": { 00:20:13.527 "state": "completed", 00:20:13.527 "digest": "sha512", 00:20:13.527 "dhgroup": "ffdhe6144" 00:20:13.527 } 00:20:13.527 } 00:20:13.527 ]' 00:20:13.527 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.527 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:13.527 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.787 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:13.788 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.788 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.788 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.788 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.047 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:20:14.047 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:20:14.616 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.616 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:14.616 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:14.616 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.616 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:14.616 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.616 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:14.616 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:14.877 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:20:14.877 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.877 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:14.877 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:14.877 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:14.877 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.877 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.877 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:14.877 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.877 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:14.877 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.877 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.877 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.138 00:20:15.138 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.138 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.138 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.398 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.398 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.398 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:15.398 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.398 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:15.398 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.398 { 00:20:15.398 "cntlid": 133, 00:20:15.398 "qid": 0, 00:20:15.398 "state": "enabled", 00:20:15.398 "thread": "nvmf_tgt_poll_group_000", 00:20:15.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:15.398 "listen_address": { 00:20:15.398 "trtype": "TCP", 00:20:15.398 "adrfam": "IPv4", 00:20:15.398 "traddr": "10.0.0.2", 00:20:15.398 "trsvcid": "4420" 00:20:15.398 }, 00:20:15.398 "peer_address": { 00:20:15.398 "trtype": "TCP", 00:20:15.398 "adrfam": "IPv4", 00:20:15.398 "traddr": "10.0.0.1", 00:20:15.398 "trsvcid": "40534" 00:20:15.398 }, 00:20:15.398 "auth": { 00:20:15.398 "state": "completed", 00:20:15.398 "digest": "sha512", 00:20:15.398 "dhgroup": "ffdhe6144" 00:20:15.398 } 00:20:15.398 } 00:20:15.398 ]' 00:20:15.398 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.399 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:15.399 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.399 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:15.399 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.399 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.399 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.399 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.659 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret 
DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:20:15.659 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:20:16.233 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.233 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:16.233 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:16.233 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.233 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:16.233 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.233 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:16.233 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:16.494 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:20:16.494 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.494 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:16.494 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:16.494 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:16.494 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.494 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:16.494 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:16.494 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.494 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:16.494 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:16.494 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:20:16.494 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.755 00:20:16.755 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.755 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.755 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.016 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.016 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.016 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:17.016 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.016 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:17.016 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.016 { 00:20:17.016 "cntlid": 135, 00:20:17.016 "qid": 0, 00:20:17.016 "state": "enabled", 00:20:17.016 "thread": "nvmf_tgt_poll_group_000", 00:20:17.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:17.016 "listen_address": { 00:20:17.016 "trtype": "TCP", 00:20:17.016 "adrfam": "IPv4", 00:20:17.016 "traddr": "10.0.0.2", 00:20:17.016 "trsvcid": "4420" 00:20:17.016 }, 00:20:17.016 "peer_address": { 00:20:17.016 "trtype": "TCP", 00:20:17.016 "adrfam": "IPv4", 00:20:17.016 "traddr": "10.0.0.1", 00:20:17.016 "trsvcid": "57114" 00:20:17.016 }, 00:20:17.016 "auth": { 00:20:17.016 "state": "completed", 00:20:17.016 "digest": "sha512", 00:20:17.016 "dhgroup": "ffdhe6144" 00:20:17.016 } 00:20:17.016 } 00:20:17.016 ]' 00:20:17.016 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.016 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:17.016 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.277 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:17.277 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.277 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.277 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.277 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.277 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:20:17.277 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:18.219 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.220 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.220 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.792 00:20:18.792 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.792 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.792 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.053 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.053 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.053 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:19.053 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.053 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:19.053 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.053 { 00:20:19.053 "cntlid": 137, 00:20:19.053 "qid": 0, 00:20:19.053 "state": "enabled", 00:20:19.053 "thread": "nvmf_tgt_poll_group_000", 00:20:19.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:19.053 "listen_address": { 00:20:19.053 "trtype": "TCP", 00:20:19.053 "adrfam": "IPv4", 00:20:19.054 "traddr": "10.0.0.2", 00:20:19.054 "trsvcid": "4420" 00:20:19.054 }, 00:20:19.054 "peer_address": { 00:20:19.054 "trtype": "TCP", 00:20:19.054 "adrfam": "IPv4", 00:20:19.054 "traddr": "10.0.0.1", 00:20:19.054 "trsvcid": "57140" 00:20:19.054 }, 00:20:19.054 "auth": { 00:20:19.054 "state": "completed", 00:20:19.054 "digest": "sha512", 00:20:19.054 "dhgroup": "ffdhe8192" 00:20:19.054 } 00:20:19.054 } 00:20:19.054 ]' 00:20:19.054 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.054 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:19.054 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.054 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:19.054 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.054 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.054 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.054 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.313 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:20:19.313 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:20:19.883 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.883 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:19.883 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:19.883 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.883 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:19.883 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.883 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:19.883 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:20.143 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:20:20.143 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.143 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:20.143 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:20.143 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:20.143 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.143 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.143 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:20.143 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.143 09:41:19 
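
The pass beginning here repeats the same round trip the earlier ffdhe6144 passes used, now with ffdhe8192. Condensed, the host-side sequence is the sketch below; the paths, addresses, and NQNs are the ones this run uses, while the function name and argument plumbing are ours, and the target-side add_host call assumes the target listens on its default RPC socket:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb

    authenticate_once() {          # args: digest dhgroup keyid
        local digest=$1 dhgroup=$2 keyid=$3
        # Pin the host to one digest/dhgroup pair so the handshake must use it.
        "$rpc_py" -s "$hostsock" bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Target side: allow this host NQN with the matching key pair.
        "$rpc_py" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # Host side: attach; DH-HMAC-CHAP runs during this call.
        "$rpc_py" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    }
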
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:20.143 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.143 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.143 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.710 00:20:20.710 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.710 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.710 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.710 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.710 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.710 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:20.710 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.710 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:20.710 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.710 { 00:20:20.710 "cntlid": 139, 00:20:20.710 "qid": 0, 00:20:20.710 "state": "enabled", 00:20:20.710 "thread": "nvmf_tgt_poll_group_000", 00:20:20.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:20.710 "listen_address": { 00:20:20.710 "trtype": "TCP", 00:20:20.710 "adrfam": "IPv4", 00:20:20.710 "traddr": "10.0.0.2", 00:20:20.710 "trsvcid": "4420" 00:20:20.710 }, 00:20:20.710 "peer_address": { 00:20:20.710 "trtype": "TCP", 00:20:20.710 "adrfam": "IPv4", 00:20:20.710 "traddr": "10.0.0.1", 00:20:20.710 "trsvcid": "57158" 00:20:20.710 }, 00:20:20.710 "auth": { 00:20:20.710 "state": "completed", 00:20:20.710 "digest": "sha512", 00:20:20.710 "dhgroup": "ffdhe8192" 00:20:20.710 } 00:20:20.710 } 00:20:20.710 ]' 00:20:20.710 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.970 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:20.970 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.970 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:20.970 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.970 09:41:20 
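
The digest/dhgroup/state assertions just above reduce to three jq probes against the target's nvmf_subsystem_get_qpairs output. A minimal sketch, reusing $rpc_py and $subnqn from the previous sketch, with the expected values for this pass:

    qpairs=$("$rpc_py" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

An auth state of "completed" means both sides finished the DH-HMAC-CHAP exchange on that qpair, which is what each pass checks before tearing the controller down.
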
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.970 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.970 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.230 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:20:21.230 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: --dhchap-ctrl-secret DHHC-1:02:YmVmZGI2N2RjNGM3MWE5ZDNhZGEwYmVjMmYyYzI5NTdjYmFlNmRhMjg0ODAwNDRiSrsbUA==: 00:20:21.799 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.799 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:21.799 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:21.799 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.799 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:21.799 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.799 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:21.799 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:22.062 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:20:22.062 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.062 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:22.062 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:22.062 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:22.062 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.062 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.062 09:41:21 
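
Each pass also drives the same handshake from the kernel host with nvme-cli, exactly as in the nvme connect lines above: the host secret travels in --dhchap-secret and the expected controller secret in --dhchap-ctrl-secret. A sketch with placeholder secrets (the log shows the real DHHC-1 strings), reusing $hostnqn:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 \
        --dhchap-secret 'DHHC-1:01:<base64 host secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<base64 controller secret>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # "disconnected 1 controller(s)" as logged above
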
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:22.062 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.062 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:22.062 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.062 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.062 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.633 00:20:22.633 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.633 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.633 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.633 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.633 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.633 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:22.633 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.633 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:22.633 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.633 { 00:20:22.633 "cntlid": 141, 00:20:22.633 "qid": 0, 00:20:22.633 "state": "enabled", 00:20:22.633 "thread": "nvmf_tgt_poll_group_000", 00:20:22.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:22.633 "listen_address": { 00:20:22.633 "trtype": "TCP", 00:20:22.633 "adrfam": "IPv4", 00:20:22.633 "traddr": "10.0.0.2", 00:20:22.633 "trsvcid": "4420" 00:20:22.633 }, 00:20:22.633 "peer_address": { 00:20:22.633 "trtype": "TCP", 00:20:22.633 "adrfam": "IPv4", 00:20:22.633 "traddr": "10.0.0.1", 00:20:22.633 "trsvcid": "57172" 00:20:22.633 }, 00:20:22.633 "auth": { 00:20:22.633 "state": "completed", 00:20:22.633 "digest": "sha512", 00:20:22.633 "dhgroup": "ffdhe8192" 00:20:22.633 } 00:20:22.633 } 00:20:22.633 ]' 00:20:22.633 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.893 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:22.894 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.894 09:41:22 
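
The DHHC-1 strings scattered through this log follow the secret container format from the NVMe DH-HMAC-CHAP specification, as we read it: DHHC-1:<hh>:<base64 of secret plus 4-byte CRC>:, where <hh> is 00 for a plain secret or 01/02/03 for one to be transformed with SHA-256/384/512. A quick length probe (our helper, for illustration only; assumes GNU base64):

    dhchap_secret_len() {
        local b64=${1#DHHC-1:??:}   # drop the "DHHC-1:<hh>:" prefix
        b64=${b64%:}                # and the trailing colon
        # decoded payload = secret || CRC-32, so subtract the 4 CRC bytes
        echo $(( $(printf %s "$b64" | base64 -d | wc -c) - 4 ))
    }
    dhchap_secret_len 'DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2:'   # -> 32

The key1 secret used in this run decodes to 32 bytes, consistent with its 01 (SHA-256) label.
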
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:22.894 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.894 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.894 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.894 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.154 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:20:23.154 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:01:M2VkOTU4YzI0Y2QyMWU5OTExZWViZmM3NGY4OTIxMjm/ivBv: 00:20:23.725 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.725 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:23.725 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:23.725 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.725 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:23.725 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.725 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:23.725 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:23.986 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:20:23.986 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.986 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:23.986 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:23.986 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:23.986 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.986 09:41:23 
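
The key3 pass starting here is one-way by design: there is no ckey3, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion above collapses to nothing and the add_host call that follows carries only --dhchap-key key3; the target authenticates the host, but the host never challenges the controller back. The idiom in isolation (array contents illustrative):

    ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]=)   # key3 has no controller key
    keyid=3
    args=( --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"} )
    echo "${args[@]}"   # -> --dhchap-key key3
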
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:23.986 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:23.986 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.986 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:23.986 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:23.986 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.986 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.246 00:20:24.246 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.246 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.246 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.507 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.507 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.507 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:24.507 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.507 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:24.507 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.507 { 00:20:24.507 "cntlid": 143, 00:20:24.507 "qid": 0, 00:20:24.507 "state": "enabled", 00:20:24.507 "thread": "nvmf_tgt_poll_group_000", 00:20:24.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:24.507 "listen_address": { 00:20:24.507 "trtype": "TCP", 00:20:24.507 "adrfam": "IPv4", 00:20:24.507 "traddr": "10.0.0.2", 00:20:24.507 "trsvcid": "4420" 00:20:24.507 }, 00:20:24.507 "peer_address": { 00:20:24.507 "trtype": "TCP", 00:20:24.507 "adrfam": "IPv4", 00:20:24.507 "traddr": "10.0.0.1", 00:20:24.507 "trsvcid": "57200" 00:20:24.507 }, 00:20:24.507 "auth": { 00:20:24.507 "state": "completed", 00:20:24.507 "digest": "sha512", 00:20:24.507 "dhgroup": "ffdhe8192" 00:20:24.507 } 00:20:24.507 } 00:20:24.507 ]' 00:20:24.507 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.507 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:24.507 
09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.768 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.768 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.768 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.768 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.768 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.768 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:20:24.768 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=: 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.708 09:41:25 
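
Having cycled through single digest/dhgroup pairs, the run now re-enables everything at once; the comma-joined arguments in the set_options call above come from expanding the digest and dhgroup arrays with IFS set to a comma. The idiom, sketched with the same lists and reusing $rpc_py and $hostsock:

    digests=(sha256 sha384 sha512)
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    "$rpc_py" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests  "$(IFS=,; printf %s "${digests[*]}")" \
        --dhchap-dhgroups "$(IFS=,; printf %s "${dhgroups[*]}")"
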
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.708 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.278 00:20:26.278 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.278 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.278 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.278 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.278 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.278 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:26.278 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.278 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:26.278 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.278 { 00:20:26.278 "cntlid": 145, 00:20:26.278 "qid": 0, 00:20:26.278 "state": "enabled", 00:20:26.278 "thread": "nvmf_tgt_poll_group_000", 00:20:26.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:26.278 "listen_address": { 00:20:26.278 "trtype": "TCP", 00:20:26.278 "adrfam": "IPv4", 00:20:26.278 "traddr": "10.0.0.2", 00:20:26.278 "trsvcid": "4420" 00:20:26.278 }, 00:20:26.278 "peer_address": { 00:20:26.278 
"trtype": "TCP", 00:20:26.278 "adrfam": "IPv4", 00:20:26.278 "traddr": "10.0.0.1", 00:20:26.278 "trsvcid": "39290" 00:20:26.278 }, 00:20:26.278 "auth": { 00:20:26.278 "state": "completed", 00:20:26.278 "digest": "sha512", 00:20:26.278 "dhgroup": "ffdhe8192" 00:20:26.278 } 00:20:26.278 } 00:20:26.278 ]' 00:20:26.539 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.539 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.539 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.539 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.539 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.539 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.539 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.539 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.800 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:20:26.800 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:NDAyYmVkODkwYjhjMWEwYTM2ZDE1MjJiOThhZTljYzRhZTQzYTQ1ZGNjY2FkZWQ1phbfRw==: --dhchap-ctrl-secret DHHC-1:03:NDY1OTRjNjQ0YmY4MWFjMzA4ZmQxZGI2OGY3ZTgwM2U3ZDAzYTdlMjFlZTk5NTU4ZTI0NTJlNTY5MzliMmZlOO3DGTk=: 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # local es=0 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@641 -- # local arg=bdev_connect 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # type -t bdev_connect 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@656 -- # bdev_connect -b nvme0 --dhchap-key key2 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:27.371 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:27.942 request: 00:20:27.942 { 00:20:27.942 "name": "nvme0", 00:20:27.942 "trtype": "tcp", 00:20:27.942 "traddr": "10.0.0.2", 00:20:27.942 "adrfam": "ipv4", 00:20:27.942 "trsvcid": "4420", 00:20:27.942 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:27.942 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:27.942 "prchk_reftag": false, 00:20:27.942 "prchk_guard": false, 00:20:27.942 "hdgst": false, 00:20:27.942 "ddgst": false, 00:20:27.942 "dhchap_key": "key2", 00:20:27.942 "allow_unrecognized_csi": false, 00:20:27.942 "method": "bdev_nvme_attach_controller", 00:20:27.942 "req_id": 1 00:20:27.942 } 00:20:27.942 Got JSON-RPC error response 00:20:27.942 response: 00:20:27.942 { 00:20:27.942 "code": -5, 00:20:27.942 "message": "Input/output error" 00:20:27.942 } 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@656 -- # es=1 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.942 09:41:27 
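
The failure above is the intended result: the subsystem was re-added with key1 only, the host attached offering key2, and the mismatch surfaces as JSON-RPC error -5 (Input/output error) from bdev_nvme_attach_controller, which the NOT wrapper then inverts into a pass. A stripped-down version of that inverted assertion (our helper, not the autotest one; reuses the variables from the first sketch):

    expect_auth_failure() {   # args: the --dhchap-* flags to attach with
        if "$rpc_py" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 "$@"; then
            echo "attach unexpectedly succeeded" >&2
            return 1
        fi
    }
    expect_auth_failure --dhchap-key key2   # host key unknown to the subsystem
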
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # local es=0 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@641 -- # local arg=bdev_connect 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # type -t bdev_connect 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@656 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:27.942 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:28.203 request: 00:20:28.203 { 00:20:28.203 "name": "nvme0", 00:20:28.203 "trtype": "tcp", 00:20:28.203 "traddr": "10.0.0.2", 00:20:28.203 "adrfam": "ipv4", 00:20:28.203 "trsvcid": "4420", 00:20:28.203 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:28.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:28.203 "prchk_reftag": false, 00:20:28.203 "prchk_guard": false, 00:20:28.203 "hdgst": false, 00:20:28.203 "ddgst": false, 00:20:28.203 "dhchap_key": "key1", 00:20:28.203 "dhchap_ctrlr_key": "ckey2", 00:20:28.203 "allow_unrecognized_csi": false, 00:20:28.203 "method": "bdev_nvme_attach_controller", 00:20:28.203 "req_id": 1 00:20:28.203 } 00:20:28.203 Got JSON-RPC error response 00:20:28.203 response: 00:20:28.203 { 00:20:28.203 "code": -5, 00:20:28.203 "message": "Input/output error" 00:20:28.203 } 00:20:28.203 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@656 -- # es=1 00:20:28.203 09:41:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:20:28.203 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:20:28.203 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:20:28.203 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:28.203 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:28.203 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.203 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:28.203 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:20:28.203 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:28.203 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.463 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:28.463 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.463 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # local es=0 00:20:28.463 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.463 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@641 -- # local arg=bdev_connect 00:20:28.463 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:20:28.463 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # type -t bdev_connect 00:20:28.463 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:20:28.463 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@656 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.463 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.463 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.723 request: 00:20:28.723 { 00:20:28.723 "name": "nvme0", 00:20:28.723 "trtype": "tcp", 00:20:28.723 "traddr": "10.0.0.2", 00:20:28.723 "adrfam": "ipv4", 00:20:28.723 "trsvcid": "4420", 00:20:28.723 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:28.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:28.723 "prchk_reftag": false, 00:20:28.723 "prchk_guard": false, 00:20:28.723 "hdgst": false, 00:20:28.723 "ddgst": false, 00:20:28.723 "dhchap_key": "key1", 00:20:28.723 "dhchap_ctrlr_key": "ckey1", 00:20:28.723 "allow_unrecognized_csi": false, 00:20:28.723 "method": "bdev_nvme_attach_controller", 00:20:28.723 "req_id": 1 00:20:28.723 } 00:20:28.723 Got JSON-RPC error response 00:20:28.723 response: 00:20:28.723 { 00:20:28.723 "code": -5, 00:20:28.723 "message": "Input/output error" 00:20:28.723 } 00:20:28.723 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@656 -- # es=1 00:20:28.723 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:20:28.723 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:20:28.723 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:20:28.723 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:28.723 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:28.723 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.723 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:28.723 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3335797 00:20:28.723 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' -z 3335797 ']' 00:20:28.723 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # kill -0 3335797 00:20:28.723 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # uname 00:20:28.723 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:20:28.723 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3335797 00:20:28.983 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:20:28.983 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:20:28.983 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3335797' 00:20:28.983 killing process with pid 3335797 00:20:28.983 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # kill 3335797 00:20:28.983 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@977 -- # wait 3335797 00:20:28.983 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:28.983 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:28.983 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:28.983 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:28.983 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=3361740 00:20:28.983 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 3361740 00:20:28.983 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:28.983 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # '[' -z 3361740 ']' 00:20:28.984 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.984 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local max_retries=100 00:20:28.984 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.984 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@843 -- # xtrace_disable 00:20:28.984 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.924 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:20:29.924 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@867 -- # return 0 00:20:29.924 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:29.924 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@733 -- # xtrace_disable 00:20:29.924 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.924 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.924 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:29.924 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3361740 00:20:29.924 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # '[' -z 3361740 ']' 00:20:29.924 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.924 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local max_retries=100 00:20:29.924 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:29.924 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@843 -- # xtrace_disable
00:20:29.924 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@863 -- # (( i == 0 ))
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@867 -- # return 0
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:30.185 null0
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.We2
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.OOd ]]
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OOd
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.aTY
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.npl ]]
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.npl
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.x2b
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ipF ]]
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ipF
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Qzq
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]]
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:30.185 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:31.127 nvme0n1
00:20:31.127 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:31.127 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:31.127 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:31.127 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:31.127 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:31.127 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:31.127 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:31.127 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:31.127 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:31.127 {
00:20:31.127 "cntlid": 1,
00:20:31.127 "qid": 0,
00:20:31.127 "state": "enabled",
00:20:31.127 "thread": "nvmf_tgt_poll_group_000",
00:20:31.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:20:31.127 "listen_address": {
00:20:31.127 "trtype": "TCP",
00:20:31.127 "adrfam": "IPv4",
00:20:31.127 "traddr": "10.0.0.2",
00:20:31.127 "trsvcid": "4420"
00:20:31.127 },
00:20:31.127 "peer_address": {
00:20:31.127 "trtype": "TCP",
00:20:31.127 "adrfam": "IPv4",
00:20:31.127 "traddr": "10.0.0.1",
00:20:31.127 "trsvcid": "39348"
00:20:31.127 },
00:20:31.127 "auth": {
00:20:31.127 "state": "completed",
00:20:31.127 "digest": "sha512",
00:20:31.127 "dhgroup": "ffdhe8192"
00:20:31.127 }
00:20:31.127 }
00:20:31.127 ]'
00:20:31.388 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:31.388 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:31.388 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:31.388 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:31.388 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:31.388 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:31.388 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:31.388 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:31.649 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=:
00:20:31.649 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=:
00:20:32.219 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:32.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:32.219 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:20:32.219 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:32.219 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:32.219 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:32.219 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3
00:20:32.219 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:32.219 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:32.219 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:32.219 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:20:32.219 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:20:32.479 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:20:32.479 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # local es=0
00:20:32.479 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:20:32.479 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@641 -- # local arg=bdev_connect
00:20:32.479 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in
00:20:32.479 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # type -t bdev_connect
00:20:32.479 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in
00:20:32.479 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@656 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:32.479 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:32.479 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:32.479 request:
00:20:32.479 {
00:20:32.479 "name": "nvme0",
00:20:32.479 "trtype": "tcp",
00:20:32.479 "traddr": "10.0.0.2",
00:20:32.479 "adrfam": "ipv4",
00:20:32.479 "trsvcid": "4420",
00:20:32.479 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:32.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:20:32.479 "prchk_reftag": false,
00:20:32.479 "prchk_guard": false,
00:20:32.479 "hdgst": false,
00:20:32.479 "ddgst": false,
00:20:32.479 "dhchap_key": "key3",
00:20:32.479 "allow_unrecognized_csi": false,
00:20:32.479 "method": "bdev_nvme_attach_controller",
00:20:32.479 "req_id": 1
00:20:32.479 }
00:20:32.479 Got JSON-RPC error response
00:20:32.479 response:
00:20:32.479 {
00:20:32.479 "code": -5,
00:20:32.479 "message": "Input/output error"
00:20:32.479 }
00:20:32.479 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@656 -- # es=1
00:20:32.479 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@664 -- # (( es > 128 ))
00:20:32.479 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # [[ -n '' ]]
00:20:32.479 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@680 -- # (( !es == 0 ))
00:20:32.479 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:20:32.479 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:20:32.479 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:20:32.479 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:20:32.739 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:20:32.739 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # local es=0
00:20:32.739 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:20:32.739 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@641 -- # local arg=bdev_connect
00:20:32.739 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in
00:20:32.739 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # type -t bdev_connect
00:20:32.739 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in
00:20:32.739 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@656 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:32.739 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:32.739 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:32.998 request:
00:20:32.998 {
00:20:32.998 "name": "nvme0",
00:20:32.998 "trtype": "tcp",
00:20:32.998 "traddr": "10.0.0.2",
00:20:32.998 "adrfam": "ipv4",
00:20:32.998 "trsvcid": "4420",
00:20:32.998 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:32.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:20:32.998 "prchk_reftag": false,
00:20:32.998 "prchk_guard": false,
00:20:32.998 "hdgst": false,
00:20:32.998 "ddgst": false,
00:20:32.998 "dhchap_key": "key3",
00:20:32.998 "allow_unrecognized_csi": false,
00:20:32.998 "method": "bdev_nvme_attach_controller",
00:20:32.998 "req_id": 1
00:20:32.998 }
00:20:32.998 Got JSON-RPC error response
00:20:32.998 response:
00:20:32.998 {
00:20:32.998 "code": -5,
00:20:32.998 "message": "Input/output error"
00:20:32.998 }
00:20:32.998 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@656 -- # es=1
00:20:32.998 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@664 -- # (( es > 128 ))
00:20:32.998 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # [[ -n '' ]]
00:20:32.998 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@680 -- # (( !es == 0 ))
00:20:32.998 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:20:32.998 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:20:32.998 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:20:32.998 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:32.998 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:32.998 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:33.258 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:20:33.258 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:33.258 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.258 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:33.258 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:20:33.258 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:33.258 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.258 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:33.258 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:33.258 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # local es=0
00:20:33.258 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:33.258 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@641 -- # local arg=bdev_connect
00:20:33.258 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in
00:20:33.258 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # type -t bdev_connect
00:20:33.258 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in
00:20:33.258 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@656 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:33.258 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:33.258 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:33.518 request:
00:20:33.518 {
00:20:33.518 "name": "nvme0",
00:20:33.518 "trtype": "tcp",
00:20:33.518 "traddr": "10.0.0.2",
00:20:33.518 "adrfam": "ipv4",
00:20:33.518 "trsvcid": "4420",
00:20:33.518 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:33.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:20:33.518 "prchk_reftag": false,
00:20:33.518 "prchk_guard": false,
00:20:33.518 "hdgst": false,
00:20:33.518 "ddgst": false,
00:20:33.518 "dhchap_key": "key0",
00:20:33.518 "dhchap_ctrlr_key": "key1",
00:20:33.518 "allow_unrecognized_csi": false,
00:20:33.518 "method": "bdev_nvme_attach_controller",
00:20:33.518 "req_id": 1
00:20:33.518 }
00:20:33.518 Got JSON-RPC error response
00:20:33.518 response:
00:20:33.518 {
00:20:33.518 "code": -5,
00:20:33.518 "message": "Input/output error"
00:20:33.518 }
00:20:33.518 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@656 -- # es=1
00:20:33.518 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@664 -- # (( es > 128 ))
00:20:33.518 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # [[ -n '' ]]
00:20:33.518 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@680 -- # (( !es == 0 ))
00:20:33.518 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:20:33.518 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:20:33.518 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:20:33.777 nvme0n1
00:20:33.777 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:20:33.777 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:20:33.777 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:34.037 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:34.037 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:34.037 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:34.037 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1
00:20:34.037 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:34.037 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:34.037 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:34.037 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:20:34.037 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:20:34.037 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:20:34.978 nvme0n1
00:20:34.978 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:20:34.978 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:20:34.978 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:34.978 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:34.978 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:34.978 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:34.978 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:34.978 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:34.978 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:20:34.978 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:20:34.978 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:35.238 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:35.238 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=:
00:20:35.238 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: --dhchap-ctrl-secret DHHC-1:03:OGU2MzZmZDdiZDdlOTE2YWZhN2NlYWQ3NTc4MTgxOTg5MTJhYWNlYjE5YTBkNTRkMGU5OTYzZDZiN2MxZTlmN4Hde1k=:
00:20:35.807 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:20:35.807 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:20:35.807 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:20:35.807 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:20:35.807 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:20:35.807 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:20:35.807 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:20:35.807 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:35.807 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:36.069 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:20:36.069 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # local es=0
00:20:36.069 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:20:36.069 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@641 -- # local arg=bdev_connect
00:20:36.069 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in
00:20:36.069 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # type -t bdev_connect
00:20:36.069 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in
00:20:36.069 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@656 -- # bdev_connect -b nvme0 --dhchap-key key1
00:20:36.069 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:20:36.069 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:20:36.639 request:
00:20:36.639 {
00:20:36.639 "name": "nvme0",
00:20:36.639 "trtype": "tcp",
00:20:36.639 "traddr": "10.0.0.2",
00:20:36.639 "adrfam": "ipv4",
00:20:36.639 "trsvcid": "4420",
00:20:36.639 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:36.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:20:36.639 "prchk_reftag": false,
00:20:36.639 "prchk_guard": false,
00:20:36.639 "hdgst": false,
00:20:36.639 "ddgst": false,
00:20:36.639 "dhchap_key": "key1",
00:20:36.639 "allow_unrecognized_csi": false,
00:20:36.639 "method": "bdev_nvme_attach_controller",
00:20:36.639 "req_id": 1
00:20:36.639 }
00:20:36.639 Got JSON-RPC error response
00:20:36.639 response:
00:20:36.639 {
00:20:36.639 "code": -5,
00:20:36.639 "message": "Input/output error"
00:20:36.639 }
00:20:36.639 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@656 -- # es=1
00:20:36.639 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@664 -- # (( es > 128 ))
00:20:36.639 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # [[ -n '' ]]
00:20:36.639 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@680 -- # (( !es == 0 ))
00:20:36.639 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:36.639 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:36.639 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:37.210 nvme0n1
00:20:37.210 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:20:37.210 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:20:37.210 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:37.471 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:37.471 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:37.471 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:37.732 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:20:37.732 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:37.732 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:37.732 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:37.732 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:20:37.732 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:20:37.732 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:20:37.993 nvme0n1
00:20:37.993 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:20:37.993 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:20:37.993 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:37.993 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:37.993 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:37.993 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:38.254 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key key3
00:20:38.254 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:38.254 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:38.254 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:38.254 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: '' 2s
00:20:38.254 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:20:38.254 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:20:38.254 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2:
00:20:38.254 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:20:38.254 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:20:38.254 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:20:38.254 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2: ]]
00:20:38.254 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NDUzZWEwY2NkNDllZDc5ZjNhYWNmM2FhMmNlNDEwNjkNLhU2:
00:20:38.254 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:20:38.254 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:20:38.254 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:20:40.165 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:20:40.165 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # local i=0
00:20:40.165 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # lsblk -l -o NAME
00:20:40.165 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # grep -q -w nvme0n1
00:20:40.165 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1245 -- # lsblk -l -o NAME
00:20:40.165 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1245 -- # grep -q -w nvme0n1
00:20:40.165 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1249 -- # return 0
00:20:40.165 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key key2
00:20:40.165 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:40.165 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:40.425 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:40.425 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: 2s
00:20:40.425 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:20:40.425 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:20:40.425 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:20:40.425 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==:
00:20:40.425 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:20:40.425 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:20:40.425 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:20:40.425 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==: ]]
00:20:40.425 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NDI3YWZjZDliM2QwMWNlYzU5OGRlOTY2MjJkZDBkNDkzNzQxNzQxNjYyMmEwNDY2+tfN9A==:
00:20:40.425 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:20:40.425 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:20:42.335 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:20:42.335 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # local i=0
00:20:42.335 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # lsblk -l -o NAME
00:20:42.335 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # grep -q -w nvme0n1
00:20:42.335 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1245 -- # lsblk -l -o NAME
00:20:42.335 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1245 -- # grep -q -w nvme0n1
00:20:42.335 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1249 -- # return 0
00:20:42.335 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:42.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:42.335 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:42.335 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:42.335 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:42.335 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:42.335 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:20:42.335 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:20:42.335 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:20:43.274 nvme0n1
00:20:43.274 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:43.274 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:43.274 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:43.274 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:43.274 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:43.274 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:43.534 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:20:43.534 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:20:43.534 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:43.793 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:43.793 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:20:43.793 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:43.793 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:43.793 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:43.793 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:20:43.793 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:20:44.054 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:20:44.054 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:20:44.054 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:44.054 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:44.054 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:44.054 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable
00:20:44.054 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:44.054 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:20:44.054 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:20:44.054 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # local es=0
00:20:44.054 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:20:44.054 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@641 -- # local arg=hostrpc
00:20:44.054 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in
00:20:44.054 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # type -t hostrpc
00:20:44.054 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in
00:20:44.055 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@656 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:20:44.055 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:20:44.625 request:
00:20:44.625 {
00:20:44.625 "name": "nvme0",
00:20:44.625 "dhchap_key": "key1",
00:20:44.625 "dhchap_ctrlr_key": "key3",
00:20:44.625 "method": "bdev_nvme_set_keys",
00:20:44.625 "req_id": 1
00:20:44.625 }
00:20:44.625 Got JSON-RPC error response
00:20:44.625 response:
00:20:44.625 {
00:20:44.625 "code": -13,
00:20:44.625 "message": "Permission denied"
00:20:44.625 }
00:20:44.625 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@656 -- # es=1
00:20:44.625 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@664 -- # (( es > 128 ))
00:20:44.625 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # [[ -n '' ]]
00:20:44.625 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@680 -- # (( !es == 0 ))
00:20:44.625 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:20:44.625 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:20:44.625 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:44.886 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@262 -- # (( 1 != 0 )) 00:20:44.886 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:20:45.826 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:45.826 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:45.826 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.826 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:20:45.827 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:45.827 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:45.827 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.827 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:45.827 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:45.827 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:45.827 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:46.767 nvme0n1 00:20:46.767 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:46.767 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:46.767 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.767 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:46.767 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:46.767 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # local es=0 00:20:46.767 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:46.767 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@641 -- # local arg=hostrpc 
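The two Permission-denied exchanges in this stretch are the negative half of the re-key flow: after the target rotates its secrets with nvmf_subsystem_set_keys, a host-side bdev_nvme_set_keys carrying stale or mismatched keys must fail with JSON-RPC error -13. The DHHC-1:<id>:...: strings being rotated are DH-HMAC-CHAP secrets in their transport container format, where the id selects the HMAC (01 = SHA-256, 02 = SHA-384, 03 = SHA-512). The NOT wrapper traced here inverts the exit status so the failing RPC counts as a pass; a simplified sketch of that helper (the real autotest_common.sh version also type-checks its argument through valid_exec_arg, as the case "$(type -t ...)" lines show):

    # Succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded -> test failure
        fi
        return 0        # non-zero exit was the expected outcome
    }
    # Usage, matching the trace: expect -13 Permission denied from the host RPC.
    NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3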
00:20:46.767 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:20:46.767 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # type -t hostrpc 00:20:46.767 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:20:46.767 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@656 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:46.767 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:47.027 request: 00:20:47.027 { 00:20:47.027 "name": "nvme0", 00:20:47.027 "dhchap_key": "key2", 00:20:47.027 "dhchap_ctrlr_key": "key0", 00:20:47.027 "method": "bdev_nvme_set_keys", 00:20:47.027 "req_id": 1 00:20:47.027 } 00:20:47.027 Got JSON-RPC error response 00:20:47.027 response: 00:20:47.027 { 00:20:47.027 "code": -13, 00:20:47.027 "message": "Permission denied" 00:20:47.027 } 00:20:47.287 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@656 -- # es=1 00:20:47.287 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:20:47.287 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:20:47.287 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:20:47.287 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:47.287 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:47.287 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.287 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:20:47.287 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:20:48.227 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:48.227 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:48.227 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.487 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:20:48.487 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:20:48.487 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:20:48.487 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3336023 00:20:48.487 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' -z 3336023 ']' 00:20:48.487 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # kill -0 3336023 00:20:48.487 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # uname 00:20:48.487 
09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:20:48.487 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3336023 00:20:48.487 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:20:48.487 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:20:48.487 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3336023' 00:20:48.487 killing process with pid 3336023 00:20:48.487 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # kill 3336023 00:20:48.487 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@977 -- # wait 3336023 00:20:48.747 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:48.747 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:48.747 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:20:48.747 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:48.747 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:20:48.747 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:48.747 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:48.747 rmmod nvme_tcp 00:20:48.747 rmmod nvme_fabrics 00:20:48.747 rmmod nvme_keyring 00:20:48.747 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:48.747 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:20:48.747 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:20:48.747 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 3361740 ']' 00:20:48.747 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 3361740 00:20:48.747 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' -z 3361740 ']' 00:20:48.747 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # kill -0 3361740 00:20:48.747 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # uname 00:20:48.747 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:20:48.747 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3361740 00:20:49.008 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:20:49.008 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:20:49.008 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3361740' 00:20:49.008 killing process with pid 3361740 00:20:49.008 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # kill 3361740 00:20:49.008 09:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@977 -- # wait 3361740 00:20:49.008 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:49.008 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:49.008 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:49.008 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:20:49.008 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:20:49.008 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:49.008 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:20:49.008 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:49.008 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:49.008 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.008 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.008 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.080 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:51.080 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.We2 /tmp/spdk.key-sha256.aTY /tmp/spdk.key-sha384.x2b /tmp/spdk.key-sha512.Qzq /tmp/spdk.key-sha512.OOd /tmp/spdk.key-sha384.npl /tmp/spdk.key-sha256.ipF '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:51.080 00:20:51.080 real 2m37.328s 00:20:51.080 user 5m53.677s 00:20:51.080 sys 0m24.743s 00:20:51.080 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # xtrace_disable 00:20:51.080 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.080 ************************************ 00:20:51.080 END TEST nvmf_auth_target 00:20:51.080 ************************************ 00:20:51.080 09:41:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:51.080 09:41:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:51.080 09:41:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:20:51.080 09:41:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:20:51.080 09:41:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:51.080 ************************************ 00:20:51.080 START TEST nvmf_bdevio_no_huge 00:20:51.080 ************************************ 00:20:51.080 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:51.341 * Looking for test storage... 
00:20:51.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1626 -- # lcov --version 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:20:51.341 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:20:51.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.342 --rc genhtml_branch_coverage=1 00:20:51.342 --rc genhtml_function_coverage=1 00:20:51.342 --rc genhtml_legend=1 00:20:51.342 --rc geninfo_all_blocks=1 00:20:51.342 --rc geninfo_unexecuted_blocks=1 00:20:51.342 00:20:51.342 ' 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:20:51.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.342 --rc genhtml_branch_coverage=1 00:20:51.342 --rc genhtml_function_coverage=1 00:20:51.342 --rc genhtml_legend=1 00:20:51.342 --rc geninfo_all_blocks=1 00:20:51.342 --rc geninfo_unexecuted_blocks=1 00:20:51.342 00:20:51.342 ' 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:20:51.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.342 --rc genhtml_branch_coverage=1 00:20:51.342 --rc genhtml_function_coverage=1 00:20:51.342 --rc genhtml_legend=1 00:20:51.342 --rc geninfo_all_blocks=1 00:20:51.342 --rc geninfo_unexecuted_blocks=1 00:20:51.342 00:20:51.342 ' 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:20:51.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.342 --rc genhtml_branch_coverage=1 00:20:51.342 --rc genhtml_function_coverage=1 00:20:51.342 --rc genhtml_legend=1 00:20:51.342 --rc geninfo_all_blocks=1 00:20:51.342 --rc geninfo_unexecuted_blocks=1 00:20:51.342 00:20:51.342 ' 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:51.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:51.342 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:51.343 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:51.343 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:51.343 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:51.343 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:51.343 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.343 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:51.343 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:51.343 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:51.343 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.343 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.343 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.604 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:51.604 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:51.604 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:20:51.604 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:59.742 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:59.743 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:59.743 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:59.743 Found net devices under 0000:31:00.0: cvl_0_0 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:59.743 Found net devices under 0000:31:00.1: cvl_0_1 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:59.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:59.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:20:59.743 00:20:59.743 --- 10.0.0.2 ping statistics --- 00:20:59.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.743 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:59.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:59.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:20:59.743 00:20:59.743 --- 10.0.0.1 ping statistics --- 00:20:59.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.743 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=3370057 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 3370057 00:20:59.743 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:59.744 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # '[' -z 3370057 ']' 00:20:59.744 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.744 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local max_retries=100 00:20:59.744 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.744 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@843 -- # xtrace_disable 00:20:59.744 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:59.744 [2024-10-07 09:41:58.802391] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
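The nvmf_tcp_init sequence traced above wires the two E810 ports into a physical loop: the target-side port moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator port stays in the root namespace as 10.0.0.1, an iptables rule admits the NVMe/TCP listen port, and the two pings confirm reachability in both directions. Condensed from the trace (interface names and addresses exactly as logged):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # host -> target, as above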
00:20:59.744 [2024-10-07 09:41:58.802467] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:59.744 [2024-10-07 09:41:58.900384] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:59.744 [2024-10-07 09:41:59.005229] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.744 [2024-10-07 09:41:59.005278] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.744 [2024-10-07 09:41:59.005290] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.744 [2024-10-07 09:41:59.005297] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.744 [2024-10-07 09:41:59.005304] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:59.744 [2024-10-07 09:41:59.006810] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:20:59.744 [2024-10-07 09:41:59.007072] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:20:59.744 [2024-10-07 09:41:59.007232] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:20:59.744 [2024-10-07 09:41:59.007233] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.004 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:21:00.004 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@867 -- # return 0 00:21:00.004 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:00.004 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@733 -- # xtrace_disable 00:21:00.004 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:00.264 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.264 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:00.264 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:00.264 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:00.264 [2024-10-07 09:41:59.688721] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.264 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:00.264 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:00.265 Malloc0 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:00.265 [2024-10-07 09:41:59.742565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:00.265 { 00:21:00.265 "params": { 00:21:00.265 "name": "Nvme$subsystem", 00:21:00.265 "trtype": "$TEST_TRANSPORT", 00:21:00.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.265 "adrfam": "ipv4", 00:21:00.265 "trsvcid": "$NVMF_PORT", 00:21:00.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.265 "hdgst": ${hdgst:-false}, 00:21:00.265 "ddgst": ${ddgst:-false} 00:21:00.265 }, 00:21:00.265 "method": "bdev_nvme_attach_controller" 00:21:00.265 } 00:21:00.265 EOF 00:21:00.265 )") 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 
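The heredoc above is gen_nvmf_target_json expanding its per-subsystem template into the concrete bdev_nvme_attach_controller config that jq then compacts; the rendered JSON is printed just below and handed to bdevio as --json /dev/fd/62, i.e. over bash process substitution rather than a file on disk. A minimal sketch of that launch pattern, assuming the helper is sourced as in this run:

    # bash expands <(...) to /dev/fd/N, which is what the --json /dev/fd/62
    # in the trace corresponds to; no config file ever touches disk.
    ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024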
00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:21:00.265 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:00.265 "params": { 00:21:00.265 "name": "Nvme1", 00:21:00.265 "trtype": "tcp", 00:21:00.265 "traddr": "10.0.0.2", 00:21:00.265 "adrfam": "ipv4", 00:21:00.265 "trsvcid": "4420", 00:21:00.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.265 "hdgst": false, 00:21:00.265 "ddgst": false 00:21:00.265 }, 00:21:00.265 "method": "bdev_nvme_attach_controller" 00:21:00.265 }' 00:21:00.265 [2024-10-07 09:41:59.800530] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:21:00.265 [2024-10-07 09:41:59.800604] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3370138 ] 00:21:00.265 [2024-10-07 09:41:59.888778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:00.525 [2024-10-07 09:41:59.996488] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.525 [2024-10-07 09:41:59.996628] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.525 [2024-10-07 09:41:59.996641] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.525 I/O targets: 00:21:00.525 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:00.525 00:21:00.525 00:21:00.525 CUnit - A unit testing framework for C - Version 2.1-3 00:21:00.525 http://cunit.sourceforge.net/ 00:21:00.525 00:21:00.525 00:21:00.525 Suite: bdevio tests on: Nvme1n1 00:21:00.786 Test: blockdev write read block ...passed 00:21:00.786 Test: blockdev write zeroes read block ...passed 00:21:00.786 Test: blockdev write zeroes read no split ...passed 00:21:00.786 Test: blockdev write zeroes read split ...passed 00:21:00.786 Test: blockdev write zeroes read split partial ...passed 00:21:00.786 Test: blockdev reset ...[2024-10-07 09:42:00.403264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:00.786 [2024-10-07 09:42:00.403382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a1250 (9): Bad file descriptor 00:21:00.786 [2024-10-07 09:42:00.422204] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:00.786 passed 00:21:00.786 Test: blockdev write read 8 blocks ...passed 00:21:00.786 Test: blockdev write read size > 128k ...passed 00:21:00.786 Test: blockdev write read invalid size ...passed 00:21:01.050 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:01.050 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:01.050 Test: blockdev write read max offset ...passed 00:21:01.050 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:01.050 Test: blockdev writev readv 8 blocks ...passed 00:21:01.050 Test: blockdev writev readv 30 x 1block ...passed 00:21:01.050 Test: blockdev writev readv block ...passed 00:21:01.050 Test: blockdev writev readv size > 128k ...passed 00:21:01.050 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:01.050 Test: blockdev comparev and writev ...[2024-10-07 09:42:00.599377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:01.050 [2024-10-07 09:42:00.599414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.050 [2024-10-07 09:42:00.599431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:01.050 [2024-10-07 09:42:00.599440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:01.050 [2024-10-07 09:42:00.599760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:01.050 [2024-10-07 09:42:00.599772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:01.050 [2024-10-07 09:42:00.599786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:01.050 [2024-10-07 09:42:00.599795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:01.050 [2024-10-07 09:42:00.600123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:01.050 [2024-10-07 09:42:00.600135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:01.050 [2024-10-07 09:42:00.600149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:01.050 [2024-10-07 09:42:00.600158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:01.050 [2024-10-07 09:42:00.600486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:01.050 [2024-10-07 09:42:00.600497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:01.051 [2024-10-07 09:42:00.600511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:01.051 [2024-10-07 09:42:00.600519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:01.051 passed 00:21:01.051 Test: blockdev nvme passthru rw ...passed 00:21:01.051 Test: blockdev nvme passthru vendor specific ...[2024-10-07 09:42:00.684030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:01.051 [2024-10-07 09:42:00.684046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:01.051 [2024-10-07 09:42:00.684269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:01.051 [2024-10-07 09:42:00.684280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:01.051 [2024-10-07 09:42:00.684546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:01.051 [2024-10-07 09:42:00.684557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:01.051 [2024-10-07 09:42:00.684831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:01.051 [2024-10-07 09:42:00.684842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:01.051 passed 00:21:01.051 Test: blockdev nvme admin passthru ...passed 00:21:01.313 Test: blockdev copy ...passed 00:21:01.313 00:21:01.313 Run Summary: Type Total Ran Passed Failed Inactive 00:21:01.313 suites 1 1 n/a 0 0 00:21:01.313 tests 23 23 23 0 0 00:21:01.313 asserts 152 152 152 0 n/a 00:21:01.313 00:21:01.313 Elapsed time = 1.134 seconds 00:21:01.573 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:01.573 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:01.573 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:01.573 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:01.573 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:01.573 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:01.573 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:01.573 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:21:01.573 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:01.573 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:21:01.573 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:01.573 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:01.573 rmmod nvme_tcp 00:21:01.573 rmmod nvme_fabrics 00:21:01.573 rmmod nvme_keyring 00:21:01.573 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:01.573 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:21:01.573 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:21:01.573 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 3370057 ']' 00:21:01.573 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 3370057 00:21:01.573 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' -z 3370057 ']' 00:21:01.574 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # kill -0 3370057 00:21:01.574 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # uname 00:21:01.574 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:21:01.574 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3370057 00:21:01.574 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # process_name=reactor_3 00:21:01.574 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@963 -- # '[' reactor_3 = sudo ']' 00:21:01.574 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3370057' 00:21:01.574 killing process with pid 3370057 00:21:01.574 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # kill 3370057 00:21:01.574 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@977 -- # wait 3370057 00:21:01.834 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:01.834 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:01.834 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:01.834 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:21:01.834 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:21:01.834 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:01.834 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:21:01.835 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:01.835 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:01.835 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.835 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.835 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:04.377 00:21:04.377 real 0m12.764s 00:21:04.377 user 0m13.759s 00:21:04.377 sys 0m6.892s 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # xtrace_disable 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:21:04.377 ************************************ 00:21:04.377 END TEST nvmf_bdevio_no_huge 00:21:04.377 ************************************ 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:04.377 ************************************ 00:21:04.377 START TEST nvmf_tls 00:21:04.377 ************************************ 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:04.377 * Looking for test storage... 00:21:04.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1626 -- # lcov --version 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:21:04.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.377 --rc genhtml_branch_coverage=1 00:21:04.377 --rc genhtml_function_coverage=1 00:21:04.377 --rc genhtml_legend=1 00:21:04.377 --rc geninfo_all_blocks=1 00:21:04.377 --rc geninfo_unexecuted_blocks=1 00:21:04.377 00:21:04.377 ' 00:21:04.377 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:21:04.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.377 --rc genhtml_branch_coverage=1 00:21:04.378 --rc genhtml_function_coverage=1 00:21:04.378 --rc genhtml_legend=1 00:21:04.378 --rc geninfo_all_blocks=1 00:21:04.378 --rc geninfo_unexecuted_blocks=1 00:21:04.378 00:21:04.378 ' 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:21:04.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.378 --rc genhtml_branch_coverage=1 00:21:04.378 --rc genhtml_function_coverage=1 00:21:04.378 --rc genhtml_legend=1 00:21:04.378 --rc geninfo_all_blocks=1 00:21:04.378 --rc geninfo_unexecuted_blocks=1 00:21:04.378 00:21:04.378 ' 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:21:04.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.378 --rc genhtml_branch_coverage=1 00:21:04.378 --rc genhtml_function_coverage=1 00:21:04.378 --rc genhtml_legend=1 00:21:04.378 --rc geninfo_all_blocks=1 00:21:04.378 --rc geninfo_unexecuted_blocks=1 00:21:04.378 00:21:04.378 ' 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
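The run has moved on to tls.sh, which begins by sourcing the coverage shim traced above: autotest_common.sh compares the lcov version against 2 with a field-by-field dotted-version check (lt 1.15 2) before exporting the LCOV_OPTS flags shown. A minimal standalone sketch of that comparison (an illustrative bash reimplementation for numeric version fields, not the harness code verbatim):

  # Return success when dotted version $1 sorts strictly before $2.
  lt() {
    local -a v1 v2
    local i
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields compare as 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                                        # equal is not "less than"
  }

  lt 1.15 2 && echo "keep branch/function coverage flags"

Here 1 < 2 decides it in the first field, so the run keeps the --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 options echoed above.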
00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:04.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:21:04.378 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.516 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 
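gather_supported_nvmf_pci_devs, entered here, seeds the e810/x722/mlx arrays with known NIC PCI IDs and then, as the "Found 0000:31:00.x" lines below show, resolves each present device to its kernel net interface. The same discovery can be sketched standalone (an approximation using lspci; the harness walks its own pci_bus_cache instead):

  # Find Intel E810 ports (vendor 0x8086, device 0x159b, the ID matched below)
  # and report their net interfaces, roughly as the trace below does.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$netdir" ] && echo "Found net device under $pci: ${netdir##*/}"
    done
  done

In this run that yields cvl_0_0 and cvl_0_1; nvmf_tcp_init below keeps cvl_0_1 (10.0.0.1) in the default namespace as the initiator side and moves cvl_0_0 (10.0.0.2) into the cvl_0_0_ns_spdk namespace as the target side, then verifies both directions with ping.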
00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:12.517 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:12.517 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:12.517 09:42:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:12.517 Found net devices under 0000:31:00.0: cvl_0_0 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:12.517 Found net devices under 0000:31:00.1: cvl_0_1 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:12.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:21:12.517 00:21:12.517 --- 10.0.0.2 ping statistics --- 00:21:12.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.517 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:21:12.517 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:12.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:21:12.517 00:21:12.517 --- 10.0.0.1 ping statistics --- 00:21:12.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.517 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3375432 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3375432 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3375432 ']' 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:21:12.518 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.518 [2024-10-07 09:42:11.704857] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:21:12.518 [2024-10-07 09:42:11.704932] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.518 [2024-10-07 09:42:11.779367] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.518 [2024-10-07 09:42:11.871953] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.518 [2024-10-07 09:42:11.872014] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.518 [2024-10-07 09:42:11.872023] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.518 [2024-10-07 09:42:11.872036] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.518 [2024-10-07 09:42:11.872042] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.518 [2024-10-07 09:42:11.872836] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.089 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:21:13.089 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:21:13.089 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:13.089 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@733 -- # xtrace_disable 00:21:13.089 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.089 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.089 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:21:13.089 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:13.089 true 00:21:13.349 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:13.349 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:21:13.349 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:21:13.349 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:21:13.349 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:13.610 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:13.610 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:21:13.870 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:21:13.870 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:21:13.870 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:14.132 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:14.132 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:21:14.132 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:21:14.132 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:21:14.132 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:14.132 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:21:14.392 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:21:14.392 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:21:14.392 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:14.653 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:14.653 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:14.653 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:21:14.653 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:21:14.653 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:14.933 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:14.933 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.f9BBLwJz7u 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.AiMWEjyzjP 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.f9BBLwJz7u 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.AiMWEjyzjP 00:21:15.194 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:15.454 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:15.714 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.f9BBLwJz7u 00:21:15.714 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.f9BBLwJz7u 00:21:15.714 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:15.975 [2024-10-07 09:42:15.416587] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.975 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:16.235 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:16.235 [2024-10-07 09:42:15.793460] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:16.235 [2024-10-07 09:42:15.793671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.235 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:16.495 malloc0 00:21:16.495 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:16.756 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.f9BBLwJz7u 00:21:16.756 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:17.016 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.f9BBLwJz7u 00:21:27.008 Initializing NVMe Controllers 00:21:27.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:27.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:27.008 Initialization complete. Launching workers. 00:21:27.008 ======================================================== 00:21:27.008 Latency(us) 00:21:27.008 Device Information : IOPS MiB/s Average min max 00:21:27.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18407.13 71.90 3477.15 1059.58 5248.27 00:21:27.008 ======================================================== 00:21:27.008 Total : 18407.13 71.90 3477.15 1059.58 5248.27 00:21:27.008 00:21:27.008 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.f9BBLwJz7u 00:21:27.008 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:27.008 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:27.008 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:27.008 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.f9BBLwJz7u 00:21:27.008 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:27.008 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3378336 00:21:27.008 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:27.008 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3378336 /var/tmp/bdevperf.sock 00:21:27.008 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3378336 ']' 00:21:27.008 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:27.008 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.008 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:21:27.008 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
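That closes out the first TLS data-path pass: the ssl sock implementation is pinned to TLS 1.3, format_interchange_psk wraps each raw hex key as NVMeTLSkey-1:01:<base64 payload>: (the NVMe-oF TLS PSK interchange format; the 01 field identifies the hash in use), the keys land in 0600-mode temp files, and spdk_nvme_perf drives I/O with --psk-path. Pulled together, the target-side wiring traced above is (a sketch of the traced commands; the rpc= shorthand is added here):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc sock_set_default_impl -i ssl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc framework_start_init                 # target was started with --wait-for-rpc
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.f9BBLwJz7u
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k on the listener is what triggers the "TLS support is considered experimental" notice above, and the --psk on the host entry binds key0 to connections from that host NQN.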
00:21:27.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:27.008 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:21:27.008 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.269 [2024-10-07 09:42:26.672440] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:21:27.269 [2024-10-07 09:42:26.672498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3378336 ] 00:21:27.269 [2024-10-07 09:42:26.748422] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.269 [2024-10-07 09:42:26.811439] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.840 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:21:27.840 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:21:27.840 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.f9BBLwJz7u 00:21:28.101 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:28.361 [2024-10-07 09:42:27.807113] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:28.361 TLSTESTn1 00:21:28.361 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:28.361 Running I/O for 10 seconds... 
00:21:38.653 4273.00 IOPS, 16.69 MiB/s 5187.00 IOPS, 20.26 MiB/s 5225.67 IOPS, 20.41 MiB/s 5381.75 IOPS, 21.02 MiB/s 5432.40 IOPS, 21.22 MiB/s 5570.17 IOPS, 21.76 MiB/s 5620.29 IOPS, 21.95 MiB/s 5616.00 IOPS, 21.94 MiB/s 5632.00 IOPS, 22.00 MiB/s 5646.50 IOPS, 22.06 MiB/s
00:21:38.653 Latency(us)
00:21:38.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:38.653 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:38.653 Verification LBA range: start 0x0 length 0x2000
00:21:38.653 TLSTESTn1 : 10.01 5651.04 22.07 0.00 0.00 22617.63 5734.40 52865.71
00:21:38.653 ===================================================================================================================
00:21:38.653 Total : 5651.04 22.07 0.00 0.00 22617.63 5734.40 52865.71
00:21:38.653 {
00:21:38.653 "results": [
00:21:38.653 {
00:21:38.653 "job": "TLSTESTn1",
00:21:38.653 "core_mask": "0x4",
00:21:38.653 "workload": "verify",
00:21:38.653 "status": "finished",
00:21:38.653 "verify_range": {
00:21:38.653 "start": 0,
00:21:38.653 "length": 8192
00:21:38.653 },
00:21:38.653 "queue_depth": 128,
00:21:38.653 "io_size": 4096,
00:21:38.653 "runtime": 10.014432,
00:21:38.653 "iops": 5651.044412703586,
00:21:38.653 "mibps": 22.074392237123384,
00:21:38.653 "io_failed": 0,
00:21:38.653 "io_timeout": 0,
00:21:38.653 "avg_latency_us": 22617.628951088493,
00:21:38.653 "min_latency_us": 5734.4,
00:21:38.653 "max_latency_us": 52865.706666666665
00:21:38.653 }
00:21:38.653 ],
00:21:38.653 "core_count": 1
00:21:38.653 }
00:21:38.653 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:38.653 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3378336
00:21:38.653 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3378336 ']'
00:21:38.653 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3378336
00:21:38.653 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname
00:21:38.653 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:21:38.653 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3378336
00:21:38.653 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_2
00:21:38.653 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_2 = sudo ']'
00:21:38.653 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3378336'
00:21:38.653 killing process with pid 3378336
00:21:38.653 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3378336
00:21:38.653 Received shutdown signal, test time was about 10.000000 seconds
00:21:38.653
00:21:38.653 Latency(us)
00:21:38.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:38.653 ===================================================================================================================
00:21:38.653 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:38.653 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3378336
00:21:38.653 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1
/tmp/tmp.AiMWEjyzjP 00:21:38.653 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # local es=0 00:21:38.653 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AiMWEjyzjP 00:21:38.653 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@641 -- # local arg=run_bdevperf 00:21:38.653 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:21:38.654 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@645 -- # type -t run_bdevperf 00:21:38.654 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:21:38.654 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@656 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AiMWEjyzjP 00:21:38.654 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:38.654 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:38.654 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:38.654 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.AiMWEjyzjP 00:21:38.654 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:38.654 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3380516 00:21:38.654 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:38.654 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3380516 /var/tmp/bdevperf.sock 00:21:38.654 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:38.654 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3380516 ']' 00:21:38.654 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:38.654 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:21:38.654 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:38.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:38.654 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:21:38.654 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.654 [2024-10-07 09:42:38.294501] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
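This case and the next several are negative tests: the NOT wrapper seen in the traces (local es=0, valid_exec_arg, and the closing (( !es == 0 )) check) succeeds only when the wrapped command fails. A simplified sketch of that exit-status logic, not the full autotest_common.sh helper:

NOT() {
	local es=0
	"$@" || es=$?
	# invert the result: the assertion holds only if the command failed
	(( es != 0 ))
}

# usage mirroring the trace: attaching with the mismatched key must fail
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AiMWEjyzjP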
00:21:38.654 [2024-10-07 09:42:38.294556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3380516 ] 00:21:38.913 [2024-10-07 09:42:38.372274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.913 [2024-10-07 09:42:38.422866] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.482 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:21:39.482 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:21:39.482 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AiMWEjyzjP 00:21:39.742 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:40.003 [2024-10-07 09:42:39.433619] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:40.003 [2024-10-07 09:42:39.438463] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:40.003 [2024-10-07 09:42:39.438692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc6c10 (107): Transport endpoint is not connected 00:21:40.003 [2024-10-07 09:42:39.439687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc6c10 (9): Bad file descriptor 00:21:40.003 [2024-10-07 09:42:39.440689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:40.003 [2024-10-07 09:42:39.440696] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:40.003 [2024-10-07 09:42:39.440701] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:40.003 [2024-10-07 09:42:39.440709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:40.003 request:
00:21:40.003 {
00:21:40.003 "name": "TLSTEST",
00:21:40.003 "trtype": "tcp",
00:21:40.003 "traddr": "10.0.0.2",
00:21:40.003 "adrfam": "ipv4",
00:21:40.003 "trsvcid": "4420",
00:21:40.003 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:40.003 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:21:40.003 "prchk_reftag": false,
00:21:40.003 "prchk_guard": false,
00:21:40.003 "hdgst": false,
00:21:40.003 "ddgst": false,
00:21:40.003 "psk": "key0",
00:21:40.003 "allow_unrecognized_csi": false,
00:21:40.003 "method": "bdev_nvme_attach_controller",
00:21:40.003 "req_id": 1
00:21:40.003 }
00:21:40.003 Got JSON-RPC error response
00:21:40.003 response:
00:21:40.003 {
00:21:40.003 "code": -5,
00:21:40.003 "message": "Input/output error"
00:21:40.003 }
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3380516
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3380516 ']'
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3380516
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3380516
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_2
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_2 = sudo ']'
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3380516'
00:21:40.003 killing process with pid 3380516
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3380516
00:21:40.003 Received shutdown signal, test time was about 10.000000 seconds
00:21:40.003
00:21:40.003 Latency(us)
00:21:40.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:40.003 ===================================================================================================================
00:21:40.003 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3380516
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@656 -- # es=1
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@664 -- # (( es > 128 ))
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # [[ -n '' ]]
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@680 -- # (( !es == 0 ))
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.f9BBLwJz7u
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # local es=0
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.f9BBLwJz7u
00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls --
common/autotest_common.sh@641 -- # local arg=run_bdevperf 00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@645 -- # type -t run_bdevperf 00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@656 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.f9BBLwJz7u 00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:40.003 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:40.004 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:40.004 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.f9BBLwJz7u 00:21:40.004 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:40.004 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3380857 00:21:40.004 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:40.004 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3380857 /var/tmp/bdevperf.sock 00:21:40.004 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:40.004 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3380857 ']' 00:21:40.004 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.004 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:21:40.004 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:40.004 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:21:40.004 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.264 [2024-10-07 09:42:39.707361] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:21:40.264 [2024-10-07 09:42:39.707417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3380857 ] 00:21:40.264 [2024-10-07 09:42:39.785032] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.264 [2024-10-07 09:42:39.834586] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.206 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:21:41.206 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:21:41.206 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.f9BBLwJz7u 00:21:41.206 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:21:41.206 [2024-10-07 09:42:40.829591] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:41.206 [2024-10-07 09:42:40.834140] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:41.206 [2024-10-07 09:42:40.834159] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:41.206 [2024-10-07 09:42:40.834179] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:41.206 [2024-10-07 09:42:40.834818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e3c10 (107): Transport endpoint is not connected 00:21:41.206 [2024-10-07 09:42:40.835813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e3c10 (9): Bad file descriptor 00:21:41.206 [2024-10-07 09:42:40.836814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:41.206 [2024-10-07 09:42:40.836821] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:41.206 [2024-10-07 09:42:40.836828] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:41.206 [2024-10-07 09:42:40.836836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
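Both lookup errors above name the identity the target searched for: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1. Since host2 was never added with nvmf_subsystem_add_host, no PSK can match it and the attach collapses into the I/O error that follows. Reading the prefix fields as identity version 0, retained PSK (R), and hash 01 (SHA-256) is an interpretation of the NVMe/TCP spec, not something this log states; under that reading the string is just:

# hypothetical helper; only the resulting format is taken from the log
psk_identity() {
	local hostnqn=$1 subnqn=$2 hash=${3:-01}
	printf 'NVMe0R%s %s %s\n' "$hash" "$hostnqn" "$subnqn"
}

psk_identity nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1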
00:21:41.206 request:
00:21:41.206 {
00:21:41.206 "name": "TLSTEST",
00:21:41.206 "trtype": "tcp",
00:21:41.206 "traddr": "10.0.0.2",
00:21:41.206 "adrfam": "ipv4",
00:21:41.206 "trsvcid": "4420",
00:21:41.206 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:41.206 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:21:41.206 "prchk_reftag": false,
00:21:41.206 "prchk_guard": false,
00:21:41.206 "hdgst": false,
00:21:41.206 "ddgst": false,
00:21:41.206 "psk": "key0",
00:21:41.206 "allow_unrecognized_csi": false,
00:21:41.206 "method": "bdev_nvme_attach_controller",
00:21:41.206 "req_id": 1
00:21:41.206 }
00:21:41.206 Got JSON-RPC error response
00:21:41.206 response:
00:21:41.206 {
00:21:41.206 "code": -5,
00:21:41.206 "message": "Input/output error"
00:21:41.206 }
00:21:41.206 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3380857
00:21:41.206 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3380857 ']'
00:21:41.467 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3380857
00:21:41.467 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname
00:21:41.467 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:21:41.467 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3380857
00:21:41.467 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_2
00:21:41.467 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_2 = sudo ']'
00:21:41.467 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3380857'
00:21:41.467 killing process with pid 3380857
00:21:41.467 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3380857
00:21:41.467 Received shutdown signal, test time was about 10.000000 seconds
00:21:41.467
00:21:41.467 Latency(us)
00:21:41.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:41.467 ===================================================================================================================
00:21:41.467 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:21:41.467 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3380857
00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@656 -- # es=1
00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@664 -- # (( es > 128 ))
00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # [[ -n '' ]]
00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@680 -- # (( !es == 0 ))
00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.f9BBLwJz7u
00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # local es=0
00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.f9BBLwJz7u
00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls --
common/autotest_common.sh@641 -- # local arg=run_bdevperf 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@645 -- # type -t run_bdevperf 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@656 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.f9BBLwJz7u 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.f9BBLwJz7u 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3381198 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3381198 /var/tmp/bdevperf.sock 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3381198 ']' 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:21:41.467 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.467 [2024-10-07 09:42:41.096153] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:21:41.468 [2024-10-07 09:42:41.096209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3381198 ] 00:21:41.728 [2024-10-07 09:42:41.175174] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.728 [2024-10-07 09:42:41.225373] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.300 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:21:42.300 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:21:42.300 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.f9BBLwJz7u 00:21:42.560 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:42.820 [2024-10-07 09:42:42.231942] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:42.820 [2024-10-07 09:42:42.236530] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:42.820 [2024-10-07 09:42:42.236548] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:42.820 [2024-10-07 09:42:42.236568] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:42.820 [2024-10-07 09:42:42.237215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1231c10 (107): Transport endpoint is not connected 00:21:42.820 [2024-10-07 09:42:42.238211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1231c10 (9): Bad file descriptor 00:21:42.820 [2024-10-07 09:42:42.239212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:42.820 [2024-10-07 09:42:42.239221] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:42.820 [2024-10-07 09:42:42.239227] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:21:42.820 [2024-10-07 09:42:42.239236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:42.820 request:
00:21:42.820 {
00:21:42.820 "name": "TLSTEST",
00:21:42.820 "trtype": "tcp",
00:21:42.820 "traddr": "10.0.0.2",
00:21:42.820 "adrfam": "ipv4",
00:21:42.820 "trsvcid": "4420",
00:21:42.820 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:21:42.820 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:21:42.820 "prchk_reftag": false,
00:21:42.820 "prchk_guard": false,
00:21:42.820 "hdgst": false,
00:21:42.820 "ddgst": false,
00:21:42.820 "psk": "key0",
00:21:42.820 "allow_unrecognized_csi": false,
00:21:42.820 "method": "bdev_nvme_attach_controller",
00:21:42.820 "req_id": 1
00:21:42.820 }
00:21:42.820 Got JSON-RPC error response
00:21:42.820 response:
00:21:42.820 {
00:21:42.820 "code": -5,
00:21:42.820 "message": "Input/output error"
00:21:42.820 }
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3381198
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3381198 ']'
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3381198
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3381198
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_2
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_2 = sudo ']'
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3381198'
00:21:42.820 killing process with pid 3381198
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3381198
00:21:42.820 Received shutdown signal, test time was about 10.000000 seconds
00:21:42.820
00:21:42.820 Latency(us)
00:21:42.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:42.820 ===================================================================================================================
00:21:42.820 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3381198
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@656 -- # es=1
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@664 -- # (( es > 128 ))
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # [[ -n '' ]]
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@680 -- # (( !es == 0 ))
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # local es=0
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@641 -- # local
arg=run_bdevperf 00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@645 -- # type -t run_bdevperf 00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@656 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3381479 00:21:42.820 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:42.821 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3381479 /var/tmp/bdevperf.sock 00:21:42.821 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:42.821 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3381479 ']' 00:21:42.821 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.821 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:21:42.821 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:42.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:42.821 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:21:42.821 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.081 [2024-10-07 09:42:42.498940] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
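This case hands keyring_file_add_key an empty string where a key path belongs; the trace below shows keyring_file_check_path rejecting it before any TLS work starts. A sketch of that gate, assuming it is essentially an absolute-path test (helper name hypothetical):

# illustrative only; mirrors the "Non-absolute paths are not allowed" error below
check_key_path() {
	local path=$1
	[[ ${path:0:1} == / ]] || { echo "Non-absolute paths are not allowed: $path" >&2; return 1; }
}

check_key_path '' || echo rejected          # the empty-psk case below
check_key_path /tmp/tmp.f9BBLwJz7u && echo accepted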
00:21:43.081 [2024-10-07 09:42:42.498995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3381479 ] 00:21:43.081 [2024-10-07 09:42:42.575915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.081 [2024-10-07 09:42:42.627179] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.651 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:21:43.651 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:21:43.651 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:21:43.911 [2024-10-07 09:42:43.445745] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:21:43.911 [2024-10-07 09:42:43.445771] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:43.911 request: 00:21:43.911 { 00:21:43.911 "name": "key0", 00:21:43.911 "path": "", 00:21:43.911 "method": "keyring_file_add_key", 00:21:43.911 "req_id": 1 00:21:43.911 } 00:21:43.911 Got JSON-RPC error response 00:21:43.911 response: 00:21:43.911 { 00:21:43.911 "code": -1, 00:21:43.911 "message": "Operation not permitted" 00:21:43.911 } 00:21:43.911 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:44.171 [2024-10-07 09:42:43.622263] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:44.171 [2024-10-07 09:42:43.622285] bdev_nvme.c:6412:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:44.171 request: 00:21:44.171 { 00:21:44.171 "name": "TLSTEST", 00:21:44.171 "trtype": "tcp", 00:21:44.171 "traddr": "10.0.0.2", 00:21:44.171 "adrfam": "ipv4", 00:21:44.171 "trsvcid": "4420", 00:21:44.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:44.171 "prchk_reftag": false, 00:21:44.171 "prchk_guard": false, 00:21:44.171 "hdgst": false, 00:21:44.171 "ddgst": false, 00:21:44.171 "psk": "key0", 00:21:44.171 "allow_unrecognized_csi": false, 00:21:44.171 "method": "bdev_nvme_attach_controller", 00:21:44.171 "req_id": 1 00:21:44.171 } 00:21:44.171 Got JSON-RPC error response 00:21:44.171 response: 00:21:44.171 { 00:21:44.171 "code": -126, 00:21:44.171 "message": "Required key not available" 00:21:44.171 } 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3381479 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3381479 ']' 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3381479 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 
3381479 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_2 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_2 = sudo ']' 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3381479' 00:21:44.171 killing process with pid 3381479 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3381479 00:21:44.171 Received shutdown signal, test time was about 10.000000 seconds 00:21:44.171 00:21:44.171 Latency(us) 00:21:44.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.171 =================================================================================================================== 00:21:44.171 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3381479 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@656 -- # es=1 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3375432 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3375432 ']' 00:21:44.171 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3375432 00:21:44.432 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname 00:21:44.432 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:21:44.432 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3375432 00:21:44.432 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:21:44.432 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:21:44.432 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3375432' 00:21:44.432 killing process with pid 3375432 00:21:44.432 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3375432 00:21:44.432 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3375432 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # 
key=00112233445566778899aabbccddeeff0011223344556677 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.pnUzoXj89B 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.pnUzoXj89B 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3381786 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3381786 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3381786 ']' 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:21:44.432 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.692 [2024-10-07 09:42:44.137730] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:21:44.692 [2024-10-07 09:42:44.137794] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.692 [2024-10-07 09:42:44.224139] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.692 [2024-10-07 09:42:44.284683] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.692 [2024-10-07 09:42:44.284721] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
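key_long above is the same interchange construction sketched after the first key setup, now with a 48-byte secret and digest 2 (the spec's SHA-384 identifier); under that assumption the earlier sketch reproduces the logged value:

format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
# expected to print the key_long captured in the trace:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: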
00:21:44.692 [2024-10-07 09:42:44.284727] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.692 [2024-10-07 09:42:44.284732] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.692 [2024-10-07 09:42:44.284736] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:44.692 [2024-10-07 09:42:44.285224] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.633 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:21:45.633 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:21:45.633 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:45.633 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@733 -- # xtrace_disable 00:21:45.633 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.633 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.633 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.pnUzoXj89B 00:21:45.633 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pnUzoXj89B 00:21:45.633 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:45.633 [2024-10-07 09:42:45.121811] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.633 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:45.894 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:45.894 [2024-10-07 09:42:45.490726] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:45.894 [2024-10-07 09:42:45.490925] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.894 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:46.154 malloc0 00:21:46.154 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:46.414 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pnUzoXj89B 00:21:46.673 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:46.673 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pnUzoXj89B 00:21:46.673 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:21:46.673 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:46.673 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:46.673 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pnUzoXj89B 00:21:46.673 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.673 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:46.673 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3382256 00:21:46.673 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:46.674 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3382256 /var/tmp/bdevperf.sock 00:21:46.674 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3382256 ']' 00:21:46.674 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.674 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:21:46.674 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.674 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:21:46.674 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.674 [2024-10-07 09:42:46.255500] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
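The run that follows repeats the positive flow already traced for the first key, now with key_long: bdevperf starts idle with -z, is configured over its private RPC socket, then bdevperf.py triggers the workload. Condensed from the commands visible in this log (paths shortened, assuming the spdk checkout as the working directory):

build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# once the socket is up (the script polls for it with waitforlisten):
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pnUzoXj89B
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
	-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
	-q nqn.2016-06.io.spdk:host1 --psk key0
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests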
00:21:46.674 [2024-10-07 09:42:46.255543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382256 ]
00:21:46.674 [2024-10-07 09:42:46.322227] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:46.933 [2024-10-07 09:42:46.374170] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:21:46.933 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 ))
00:21:46.933 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0
00:21:46.933 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pnUzoXj89B
00:21:47.193 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:21:47.193 [2024-10-07 09:42:46.759231] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:21:47.193 TLSTESTn1
00:21:47.452 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:21:47.452 Running I/O for 10 seconds...
00:21:57.420 5349.00 IOPS, 20.89 MiB/s 4769.00 IOPS, 18.63 MiB/s 5311.67 IOPS, 20.75 MiB/s 5309.00 IOPS, 20.74 MiB/s 5386.00 IOPS, 21.04 MiB/s 5451.67 IOPS, 21.30 MiB/s 5607.86 IOPS, 21.91 MiB/s 5574.12 IOPS, 21.77 MiB/s 5544.67 IOPS, 21.66 MiB/s 5576.00 IOPS, 21.78 MiB/s
00:21:57.420 Latency(us)
00:21:57.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:57.420 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:57.420 Verification LBA range: start 0x0 length 0x2000
00:21:57.420 TLSTESTn1 : 10.05 5562.86 21.73 0.00 0.00 22950.15 4450.99 55924.05
00:21:57.420 ===================================================================================================================
00:21:57.420 Total : 5562.86 21.73 0.00 0.00 22950.15 4450.99 55924.05
00:21:57.420 {
00:21:57.420 "results": [
00:21:57.420 {
00:21:57.420 "job": "TLSTESTn1",
00:21:57.420 "core_mask": "0x4",
00:21:57.420 "workload": "verify",
00:21:57.420 "status": "finished",
00:21:57.420 "verify_range": {
00:21:57.420 "start": 0,
00:21:57.420 "length": 8192
00:21:57.420 },
00:21:57.420 "queue_depth": 128,
00:21:57.420 "io_size": 4096,
00:21:57.420 "runtime": 10.046636,
00:21:57.420 "iops": 5562.857059815843,
00:21:57.420 "mibps": 21.729910389905637,
00:21:57.420 "io_failed": 0,
00:21:57.420 "io_timeout": 0,
00:21:57.420 "avg_latency_us": 22950.14812863823,
00:21:57.420 "min_latency_us": 4450.986666666667,
00:21:57.420 "max_latency_us": 55924.05333333334
00:21:57.420 }
00:21:57.420 ],
00:21:57.420 "core_count": 1
00:21:57.420 }
00:21:57.420 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:57.420 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3382256
00:21:57.420 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls --
common/autotest_common.sh@953 -- # '[' -z 3382256 ']' 00:21:57.420 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3382256 00:21:57.420 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname 00:21:57.420 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:21:57.420 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3382256 00:21:57.681 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_2 00:21:57.681 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_2 = sudo ']' 00:21:57.681 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3382256' 00:21:57.681 killing process with pid 3382256 00:21:57.681 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3382256 00:21:57.682 Received shutdown signal, test time was about 10.000000 seconds 00:21:57.682 00:21:57.682 Latency(us) 00:21:57.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.682 =================================================================================================================== 00:21:57.682 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3382256 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.pnUzoXj89B 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pnUzoXj89B 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # local es=0 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pnUzoXj89B 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@641 -- # local arg=run_bdevperf 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@645 -- # type -t run_bdevperf 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@656 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pnUzoXj89B 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pnUzoXj89B 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3384283 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 
-- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3384283 /var/tmp/bdevperf.sock 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3384283 ']' 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:21:57.682 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.682 [2024-10-07 09:42:57.266187] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:21:57.682 [2024-10-07 09:42:57.266240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3384283 ] 00:21:57.942 [2024-10-07 09:42:57.344582] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.942 [2024-10-07 09:42:57.395109] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.512 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:21:58.512 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:21:58.512 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pnUzoXj89B 00:21:58.773 [2024-10-07 09:42:58.217339] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pnUzoXj89B': 0100666 00:21:58.773 [2024-10-07 09:42:58.217366] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:58.773 request: 00:21:58.773 { 00:21:58.773 "name": "key0", 00:21:58.773 "path": "/tmp/tmp.pnUzoXj89B", 00:21:58.773 "method": "keyring_file_add_key", 00:21:58.773 "req_id": 1 00:21:58.773 } 00:21:58.773 Got JSON-RPC error response 00:21:58.773 response: 00:21:58.773 { 00:21:58.773 "code": -1, 00:21:58.773 "message": "Operation not permitted" 00:21:58.773 } 00:21:58.773 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:58.773 [2024-10-07 09:42:58.401871] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.773 [2024-10-07 09:42:58.401891] bdev_nvme.c:6412:spdk_bdev_nvme_create: *ERROR*: Could not 
load PSK: key0 00:21:58.773 request: 00:21:58.773 { 00:21:58.773 "name": "TLSTEST", 00:21:58.773 "trtype": "tcp", 00:21:58.773 "traddr": "10.0.0.2", 00:21:58.773 "adrfam": "ipv4", 00:21:58.773 "trsvcid": "4420", 00:21:58.773 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.773 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:58.773 "prchk_reftag": false, 00:21:58.773 "prchk_guard": false, 00:21:58.773 "hdgst": false, 00:21:58.773 "ddgst": false, 00:21:58.773 "psk": "key0", 00:21:58.773 "allow_unrecognized_csi": false, 00:21:58.773 "method": "bdev_nvme_attach_controller", 00:21:58.773 "req_id": 1 00:21:58.773 } 00:21:58.773 Got JSON-RPC error response 00:21:58.773 response: 00:21:58.773 { 00:21:58.773 "code": -126, 00:21:58.773 "message": "Required key not available" 00:21:58.773 } 00:21:58.773 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3384283 00:21:58.773 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3384283 ']' 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3384283 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3384283 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_2 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_2 = sudo ']' 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3384283' 00:21:59.033 killing process with pid 3384283 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3384283 00:21:59.033 Received shutdown signal, test time was about 10.000000 seconds 00:21:59.033 00:21:59.033 Latency(us) 00:21:59.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.033 =================================================================================================================== 00:21:59.033 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3384283 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@656 -- # es=1 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3381786 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3381786 ']' 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3381786 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3381786 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3381786' 00:21:59.033 killing process with pid 3381786 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3381786 00:21:59.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3381786 00:21:59.294 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:21:59.294 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:59.294 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:59.294 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.294 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3384629 00:21:59.294 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3384629 00:21:59.294 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:59.294 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3384629 ']' 00:21:59.294 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.294 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:21:59.294 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.294 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:21:59.294 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.294 [2024-10-07 09:42:58.854904] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:21:59.294 [2024-10-07 09:42:58.854955] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.294 [2024-10-07 09:42:58.915554] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.555 [2024-10-07 09:42:58.967418] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.555 [2024-10-07 09:42:58.967454] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:59.555 [2024-10-07 09:42:58.967461] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.555 [2024-10-07 09:42:58.967465] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.555 [2024-10-07 09:42:58.967469] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.555 [2024-10-07 09:42:58.967948] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.555 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:21:59.555 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:21:59.555 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:59.555 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@733 -- # xtrace_disable 00:21:59.555 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.555 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.555 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.pnUzoXj89B 00:21:59.555 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # local es=0 00:21:59.555 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.pnUzoXj89B 00:21:59.555 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@641 -- # local arg=setup_nvmf_tgt 00:21:59.555 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:21:59.555 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@645 -- # type -t setup_nvmf_tgt 00:21:59.555 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:21:59.555 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@656 -- # setup_nvmf_tgt /tmp/tmp.pnUzoXj89B 00:21:59.555 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pnUzoXj89B 00:21:59.555 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:59.816 [2024-10-07 09:42:59.249826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.816 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:59.816 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:00.077 [2024-10-07 09:42:59.618728] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:00.077 [2024-10-07 09:42:59.618932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.077 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:00.337 malloc0 00:22:00.337 09:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:00.599 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pnUzoXj89B 00:22:00.599 [2024-10-07 09:43:00.179561] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pnUzoXj89B': 0100666 00:22:00.599 [2024-10-07 09:43:00.179587] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:00.599 request: 00:22:00.599 { 00:22:00.599 "name": "key0", 00:22:00.599 "path": "/tmp/tmp.pnUzoXj89B", 00:22:00.599 "method": "keyring_file_add_key", 00:22:00.599 "req_id": 1 00:22:00.599 } 00:22:00.599 Got JSON-RPC error response 00:22:00.599 response: 00:22:00.599 { 00:22:00.599 "code": -1, 00:22:00.599 "message": "Operation not permitted" 00:22:00.599 } 00:22:00.599 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:00.860 [2024-10-07 09:43:00.364028] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:22:00.860 [2024-10-07 09:43:00.364053] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:00.860 request: 00:22:00.860 { 00:22:00.860 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.860 "host": "nqn.2016-06.io.spdk:host1", 00:22:00.860 "psk": "key0", 00:22:00.860 "method": "nvmf_subsystem_add_host", 00:22:00.860 "req_id": 1 00:22:00.860 } 00:22:00.860 Got JSON-RPC error response 00:22:00.860 response: 00:22:00.860 { 00:22:00.860 "code": -32603, 00:22:00.860 "message": "Internal error" 00:22:00.860 } 00:22:00.860 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@656 -- # es=1 00:22:00.860 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:22:00.860 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:22:00.860 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:22:00.860 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3384629 00:22:00.860 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3384629 ']' 00:22:00.860 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3384629 00:22:00.860 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname 00:22:00.860 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:00.860 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3384629 00:22:00.860 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:22:00.860 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:22:00.860 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3384629' 00:22:00.860 killing process with pid 3384629 00:22:00.860 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@972 -- # kill 3384629 00:22:00.860 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3384629 00:22:01.121 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.pnUzoXj89B 00:22:01.121 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:22:01.121 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:01.121 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:01.121 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.121 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3385000 00:22:01.121 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3385000 00:22:01.121 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:01.121 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3385000 ']' 00:22:01.121 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.121 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:01.121 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.121 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:01.121 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.121 [2024-10-07 09:43:00.649498] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:22:01.121 [2024-10-07 09:43:00.649551] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.121 [2024-10-07 09:43:00.731988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.381 [2024-10-07 09:43:00.785281] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.381 [2024-10-07 09:43:00.785316] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.381 [2024-10-07 09:43:00.785322] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.381 [2024-10-07 09:43:00.785327] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.381 [2024-10-07 09:43:00.785330] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
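Note on the failure and recovery above: SPDK's file-based keyring rejects any PSK file whose mode grants group or other access, and the '0100666' printed by keyring_file_check_path is the full st_mode of the 0666 file (S_IFREG | 0666). That is why keyring_file_add_key answered "Operation not permitted" and bdev_nvme_attach_controller then failed with -126 ("Required key not available"), and why the chmod 0600 at target/tls.sh@182 is enough to let the setup that follows succeed. A minimal sketch of the working sequence against a running target (the key path and contents below are placeholders, not the PSK this run generated):

  echo 'NVMeTLSkey-1:01:placeholder' > /tmp/psk.key   # hypothetical key material
  chmod 0600 /tmp/psk.key                             # anything looser fails keyring_file_check_path
  scripts/rpc.py keyring_file_add_key key0 /tmp/psk.key
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0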
00:22:01.381 [2024-10-07 09:43:00.785819] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.953 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:22:01.953 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:22:01.953 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:01.953 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@733 -- # xtrace_disable 00:22:01.953 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.953 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.953 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.pnUzoXj89B 00:22:01.953 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pnUzoXj89B 00:22:01.953 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:02.213 [2024-10-07 09:43:01.629100] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.213 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:02.214 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:02.473 [2024-10-07 09:43:01.998042] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:02.473 [2024-10-07 09:43:01.998241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.473 09:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:02.733 malloc0 00:22:02.733 09:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:02.993 09:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pnUzoXj89B 00:22:02.993 09:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:03.254 09:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:03.254 09:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3385402 00:22:03.254 09:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.254 09:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3385402 /var/tmp/bdevperf.sock 00:22:03.254 09:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@834 -- # '[' -z 3385402 ']' 00:22:03.254 09:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.254 09:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:03.254 09:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.255 09:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:03.255 09:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.255 [2024-10-07 09:43:02.814154] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:22:03.255 [2024-10-07 09:43:02.814207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385402 ] 00:22:03.255 [2024-10-07 09:43:02.892198] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.517 [2024-10-07 09:43:02.955482] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.089 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:22:04.089 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:22:04.089 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pnUzoXj89B 00:22:04.350 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:04.350 [2024-10-07 09:43:03.947122] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:04.611 TLSTESTn1 00:22:04.611 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:04.873 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:22:04.873 "subsystems": [ 00:22:04.873 { 00:22:04.873 "subsystem": "keyring", 00:22:04.873 "config": [ 00:22:04.873 { 00:22:04.873 "method": "keyring_file_add_key", 00:22:04.873 "params": { 00:22:04.873 "name": "key0", 00:22:04.873 "path": "/tmp/tmp.pnUzoXj89B" 00:22:04.873 } 00:22:04.873 } 00:22:04.873 ] 00:22:04.873 }, 00:22:04.873 { 00:22:04.873 "subsystem": "iobuf", 00:22:04.873 "config": [ 00:22:04.873 { 00:22:04.873 "method": "iobuf_set_options", 00:22:04.873 "params": { 00:22:04.873 "small_pool_count": 8192, 00:22:04.873 "large_pool_count": 1024, 00:22:04.873 "small_bufsize": 8192, 00:22:04.873 "large_bufsize": 135168 00:22:04.873 } 00:22:04.873 } 00:22:04.873 ] 00:22:04.873 }, 00:22:04.873 { 00:22:04.873 "subsystem": "sock", 00:22:04.873 "config": [ 00:22:04.873 { 00:22:04.873 "method": "sock_set_default_impl", 00:22:04.873 "params": { 00:22:04.873 "impl_name": "posix" 00:22:04.873 } 00:22:04.873 }, 
00:22:04.873 { 00:22:04.873 "method": "sock_impl_set_options", 00:22:04.873 "params": { 00:22:04.873 "impl_name": "ssl", 00:22:04.873 "recv_buf_size": 4096, 00:22:04.873 "send_buf_size": 4096, 00:22:04.873 "enable_recv_pipe": true, 00:22:04.873 "enable_quickack": false, 00:22:04.873 "enable_placement_id": 0, 00:22:04.873 "enable_zerocopy_send_server": true, 00:22:04.873 "enable_zerocopy_send_client": false, 00:22:04.873 "zerocopy_threshold": 0, 00:22:04.873 "tls_version": 0, 00:22:04.873 "enable_ktls": false 00:22:04.873 } 00:22:04.873 }, 00:22:04.873 { 00:22:04.873 "method": "sock_impl_set_options", 00:22:04.873 "params": { 00:22:04.873 "impl_name": "posix", 00:22:04.873 "recv_buf_size": 2097152, 00:22:04.873 "send_buf_size": 2097152, 00:22:04.873 "enable_recv_pipe": true, 00:22:04.873 "enable_quickack": false, 00:22:04.873 "enable_placement_id": 0, 00:22:04.873 "enable_zerocopy_send_server": true, 00:22:04.873 "enable_zerocopy_send_client": false, 00:22:04.873 "zerocopy_threshold": 0, 00:22:04.873 "tls_version": 0, 00:22:04.873 "enable_ktls": false 00:22:04.873 } 00:22:04.873 } 00:22:04.873 ] 00:22:04.873 }, 00:22:04.873 { 00:22:04.873 "subsystem": "vmd", 00:22:04.873 "config": [] 00:22:04.873 }, 00:22:04.873 { 00:22:04.873 "subsystem": "accel", 00:22:04.873 "config": [ 00:22:04.873 { 00:22:04.873 "method": "accel_set_options", 00:22:04.873 "params": { 00:22:04.873 "small_cache_size": 128, 00:22:04.873 "large_cache_size": 16, 00:22:04.873 "task_count": 2048, 00:22:04.873 "sequence_count": 2048, 00:22:04.873 "buf_count": 2048 00:22:04.873 } 00:22:04.873 } 00:22:04.873 ] 00:22:04.873 }, 00:22:04.873 { 00:22:04.873 "subsystem": "bdev", 00:22:04.873 "config": [ 00:22:04.873 { 00:22:04.873 "method": "bdev_set_options", 00:22:04.873 "params": { 00:22:04.873 "bdev_io_pool_size": 65535, 00:22:04.873 "bdev_io_cache_size": 256, 00:22:04.873 "bdev_auto_examine": true, 00:22:04.873 "iobuf_small_cache_size": 128, 00:22:04.873 "iobuf_large_cache_size": 16 00:22:04.873 } 00:22:04.873 }, 00:22:04.873 { 00:22:04.873 "method": "bdev_raid_set_options", 00:22:04.873 "params": { 00:22:04.873 "process_window_size_kb": 1024, 00:22:04.873 "process_max_bandwidth_mb_sec": 0 00:22:04.873 } 00:22:04.873 }, 00:22:04.873 { 00:22:04.873 "method": "bdev_iscsi_set_options", 00:22:04.873 "params": { 00:22:04.873 "timeout_sec": 30 00:22:04.873 } 00:22:04.873 }, 00:22:04.873 { 00:22:04.873 "method": "bdev_nvme_set_options", 00:22:04.873 "params": { 00:22:04.873 "action_on_timeout": "none", 00:22:04.873 "timeout_us": 0, 00:22:04.873 "timeout_admin_us": 0, 00:22:04.873 "keep_alive_timeout_ms": 10000, 00:22:04.873 "arbitration_burst": 0, 00:22:04.873 "low_priority_weight": 0, 00:22:04.873 "medium_priority_weight": 0, 00:22:04.873 "high_priority_weight": 0, 00:22:04.873 "nvme_adminq_poll_period_us": 10000, 00:22:04.873 "nvme_ioq_poll_period_us": 0, 00:22:04.873 "io_queue_requests": 0, 00:22:04.873 "delay_cmd_submit": true, 00:22:04.873 "transport_retry_count": 4, 00:22:04.873 "bdev_retry_count": 3, 00:22:04.873 "transport_ack_timeout": 0, 00:22:04.873 "ctrlr_loss_timeout_sec": 0, 00:22:04.873 "reconnect_delay_sec": 0, 00:22:04.873 "fast_io_fail_timeout_sec": 0, 00:22:04.873 "disable_auto_failback": false, 00:22:04.873 "generate_uuids": false, 00:22:04.873 "transport_tos": 0, 00:22:04.873 "nvme_error_stat": false, 00:22:04.873 "rdma_srq_size": 0, 00:22:04.873 "io_path_stat": false, 00:22:04.873 "allow_accel_sequence": false, 00:22:04.873 "rdma_max_cq_size": 0, 00:22:04.873 "rdma_cm_event_timeout_ms": 0, 00:22:04.873 
"dhchap_digests": [ 00:22:04.873 "sha256", 00:22:04.873 "sha384", 00:22:04.873 "sha512" 00:22:04.873 ], 00:22:04.873 "dhchap_dhgroups": [ 00:22:04.873 "null", 00:22:04.873 "ffdhe2048", 00:22:04.873 "ffdhe3072", 00:22:04.873 "ffdhe4096", 00:22:04.873 "ffdhe6144", 00:22:04.873 "ffdhe8192" 00:22:04.873 ] 00:22:04.873 } 00:22:04.873 }, 00:22:04.873 { 00:22:04.873 "method": "bdev_nvme_set_hotplug", 00:22:04.873 "params": { 00:22:04.873 "period_us": 100000, 00:22:04.873 "enable": false 00:22:04.873 } 00:22:04.873 }, 00:22:04.873 { 00:22:04.873 "method": "bdev_malloc_create", 00:22:04.873 "params": { 00:22:04.873 "name": "malloc0", 00:22:04.873 "num_blocks": 8192, 00:22:04.873 "block_size": 4096, 00:22:04.873 "physical_block_size": 4096, 00:22:04.873 "uuid": "64f36b38-0800-49e0-88e4-d239b6bc02d0", 00:22:04.873 "optimal_io_boundary": 0, 00:22:04.873 "md_size": 0, 00:22:04.873 "dif_type": 0, 00:22:04.873 "dif_is_head_of_md": false, 00:22:04.873 "dif_pi_format": 0 00:22:04.873 } 00:22:04.873 }, 00:22:04.873 { 00:22:04.873 "method": "bdev_wait_for_examine" 00:22:04.873 } 00:22:04.873 ] 00:22:04.873 }, 00:22:04.873 { 00:22:04.873 "subsystem": "nbd", 00:22:04.873 "config": [] 00:22:04.873 }, 00:22:04.873 { 00:22:04.873 "subsystem": "scheduler", 00:22:04.873 "config": [ 00:22:04.873 { 00:22:04.873 "method": "framework_set_scheduler", 00:22:04.873 "params": { 00:22:04.873 "name": "static" 00:22:04.873 } 00:22:04.873 } 00:22:04.873 ] 00:22:04.873 }, 00:22:04.873 { 00:22:04.873 "subsystem": "nvmf", 00:22:04.873 "config": [ 00:22:04.873 { 00:22:04.873 "method": "nvmf_set_config", 00:22:04.873 "params": { 00:22:04.873 "discovery_filter": "match_any", 00:22:04.873 "admin_cmd_passthru": { 00:22:04.873 "identify_ctrlr": false 00:22:04.873 }, 00:22:04.873 "dhchap_digests": [ 00:22:04.873 "sha256", 00:22:04.873 "sha384", 00:22:04.873 "sha512" 00:22:04.874 ], 00:22:04.874 "dhchap_dhgroups": [ 00:22:04.874 "null", 00:22:04.874 "ffdhe2048", 00:22:04.874 "ffdhe3072", 00:22:04.874 "ffdhe4096", 00:22:04.874 "ffdhe6144", 00:22:04.874 "ffdhe8192" 00:22:04.874 ] 00:22:04.874 } 00:22:04.874 }, 00:22:04.874 { 00:22:04.874 "method": "nvmf_set_max_subsystems", 00:22:04.874 "params": { 00:22:04.874 "max_subsystems": 1024 00:22:04.874 } 00:22:04.874 }, 00:22:04.874 { 00:22:04.874 "method": "nvmf_set_crdt", 00:22:04.874 "params": { 00:22:04.874 "crdt1": 0, 00:22:04.874 "crdt2": 0, 00:22:04.874 "crdt3": 0 00:22:04.874 } 00:22:04.874 }, 00:22:04.874 { 00:22:04.874 "method": "nvmf_create_transport", 00:22:04.874 "params": { 00:22:04.874 "trtype": "TCP", 00:22:04.874 "max_queue_depth": 128, 00:22:04.874 "max_io_qpairs_per_ctrlr": 127, 00:22:04.874 "in_capsule_data_size": 4096, 00:22:04.874 "max_io_size": 131072, 00:22:04.874 "io_unit_size": 131072, 00:22:04.874 "max_aq_depth": 128, 00:22:04.874 "num_shared_buffers": 511, 00:22:04.874 "buf_cache_size": 4294967295, 00:22:04.874 "dif_insert_or_strip": false, 00:22:04.874 "zcopy": false, 00:22:04.874 "c2h_success": false, 00:22:04.874 "sock_priority": 0, 00:22:04.874 "abort_timeout_sec": 1, 00:22:04.874 "ack_timeout": 0, 00:22:04.874 "data_wr_pool_size": 0 00:22:04.874 } 00:22:04.874 }, 00:22:04.874 { 00:22:04.874 "method": "nvmf_create_subsystem", 00:22:04.874 "params": { 00:22:04.874 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.874 "allow_any_host": false, 00:22:04.874 "serial_number": "SPDK00000000000001", 00:22:04.874 "model_number": "SPDK bdev Controller", 00:22:04.874 "max_namespaces": 10, 00:22:04.874 "min_cntlid": 1, 00:22:04.874 "max_cntlid": 65519, 00:22:04.874 
"ana_reporting": false 00:22:04.874 } 00:22:04.874 }, 00:22:04.874 { 00:22:04.874 "method": "nvmf_subsystem_add_host", 00:22:04.874 "params": { 00:22:04.874 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.874 "host": "nqn.2016-06.io.spdk:host1", 00:22:04.874 "psk": "key0" 00:22:04.874 } 00:22:04.874 }, 00:22:04.874 { 00:22:04.874 "method": "nvmf_subsystem_add_ns", 00:22:04.874 "params": { 00:22:04.874 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.874 "namespace": { 00:22:04.874 "nsid": 1, 00:22:04.874 "bdev_name": "malloc0", 00:22:04.874 "nguid": "64F36B38080049E088E4D239B6BC02D0", 00:22:04.874 "uuid": "64f36b38-0800-49e0-88e4-d239b6bc02d0", 00:22:04.874 "no_auto_visible": false 00:22:04.874 } 00:22:04.874 } 00:22:04.874 }, 00:22:04.874 { 00:22:04.874 "method": "nvmf_subsystem_add_listener", 00:22:04.874 "params": { 00:22:04.874 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.874 "listen_address": { 00:22:04.874 "trtype": "TCP", 00:22:04.874 "adrfam": "IPv4", 00:22:04.874 "traddr": "10.0.0.2", 00:22:04.874 "trsvcid": "4420" 00:22:04.874 }, 00:22:04.874 "secure_channel": true 00:22:04.874 } 00:22:04.874 } 00:22:04.874 ] 00:22:04.874 } 00:22:04.874 ] 00:22:04.874 }' 00:22:04.874 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:05.135 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:22:05.135 "subsystems": [ 00:22:05.135 { 00:22:05.135 "subsystem": "keyring", 00:22:05.135 "config": [ 00:22:05.135 { 00:22:05.135 "method": "keyring_file_add_key", 00:22:05.135 "params": { 00:22:05.135 "name": "key0", 00:22:05.135 "path": "/tmp/tmp.pnUzoXj89B" 00:22:05.135 } 00:22:05.135 } 00:22:05.135 ] 00:22:05.135 }, 00:22:05.135 { 00:22:05.135 "subsystem": "iobuf", 00:22:05.135 "config": [ 00:22:05.135 { 00:22:05.135 "method": "iobuf_set_options", 00:22:05.135 "params": { 00:22:05.135 "small_pool_count": 8192, 00:22:05.135 "large_pool_count": 1024, 00:22:05.135 "small_bufsize": 8192, 00:22:05.135 "large_bufsize": 135168 00:22:05.135 } 00:22:05.135 } 00:22:05.135 ] 00:22:05.135 }, 00:22:05.135 { 00:22:05.135 "subsystem": "sock", 00:22:05.135 "config": [ 00:22:05.135 { 00:22:05.135 "method": "sock_set_default_impl", 00:22:05.135 "params": { 00:22:05.135 "impl_name": "posix" 00:22:05.135 } 00:22:05.135 }, 00:22:05.135 { 00:22:05.135 "method": "sock_impl_set_options", 00:22:05.135 "params": { 00:22:05.135 "impl_name": "ssl", 00:22:05.135 "recv_buf_size": 4096, 00:22:05.135 "send_buf_size": 4096, 00:22:05.135 "enable_recv_pipe": true, 00:22:05.135 "enable_quickack": false, 00:22:05.135 "enable_placement_id": 0, 00:22:05.135 "enable_zerocopy_send_server": true, 00:22:05.135 "enable_zerocopy_send_client": false, 00:22:05.135 "zerocopy_threshold": 0, 00:22:05.135 "tls_version": 0, 00:22:05.135 "enable_ktls": false 00:22:05.135 } 00:22:05.135 }, 00:22:05.135 { 00:22:05.135 "method": "sock_impl_set_options", 00:22:05.135 "params": { 00:22:05.135 "impl_name": "posix", 00:22:05.135 "recv_buf_size": 2097152, 00:22:05.135 "send_buf_size": 2097152, 00:22:05.135 "enable_recv_pipe": true, 00:22:05.135 "enable_quickack": false, 00:22:05.135 "enable_placement_id": 0, 00:22:05.135 "enable_zerocopy_send_server": true, 00:22:05.135 "enable_zerocopy_send_client": false, 00:22:05.135 "zerocopy_threshold": 0, 00:22:05.135 "tls_version": 0, 00:22:05.135 "enable_ktls": false 00:22:05.135 } 00:22:05.135 } 00:22:05.135 ] 00:22:05.135 }, 00:22:05.135 { 00:22:05.135 
"subsystem": "vmd", 00:22:05.135 "config": [] 00:22:05.135 }, 00:22:05.135 { 00:22:05.135 "subsystem": "accel", 00:22:05.135 "config": [ 00:22:05.135 { 00:22:05.135 "method": "accel_set_options", 00:22:05.135 "params": { 00:22:05.135 "small_cache_size": 128, 00:22:05.135 "large_cache_size": 16, 00:22:05.135 "task_count": 2048, 00:22:05.135 "sequence_count": 2048, 00:22:05.135 "buf_count": 2048 00:22:05.135 } 00:22:05.135 } 00:22:05.135 ] 00:22:05.135 }, 00:22:05.135 { 00:22:05.135 "subsystem": "bdev", 00:22:05.135 "config": [ 00:22:05.135 { 00:22:05.135 "method": "bdev_set_options", 00:22:05.135 "params": { 00:22:05.135 "bdev_io_pool_size": 65535, 00:22:05.135 "bdev_io_cache_size": 256, 00:22:05.135 "bdev_auto_examine": true, 00:22:05.135 "iobuf_small_cache_size": 128, 00:22:05.135 "iobuf_large_cache_size": 16 00:22:05.135 } 00:22:05.135 }, 00:22:05.135 { 00:22:05.135 "method": "bdev_raid_set_options", 00:22:05.135 "params": { 00:22:05.135 "process_window_size_kb": 1024, 00:22:05.135 "process_max_bandwidth_mb_sec": 0 00:22:05.135 } 00:22:05.135 }, 00:22:05.135 { 00:22:05.135 "method": "bdev_iscsi_set_options", 00:22:05.135 "params": { 00:22:05.135 "timeout_sec": 30 00:22:05.135 } 00:22:05.135 }, 00:22:05.135 { 00:22:05.135 "method": "bdev_nvme_set_options", 00:22:05.135 "params": { 00:22:05.135 "action_on_timeout": "none", 00:22:05.135 "timeout_us": 0, 00:22:05.135 "timeout_admin_us": 0, 00:22:05.135 "keep_alive_timeout_ms": 10000, 00:22:05.135 "arbitration_burst": 0, 00:22:05.135 "low_priority_weight": 0, 00:22:05.135 "medium_priority_weight": 0, 00:22:05.135 "high_priority_weight": 0, 00:22:05.135 "nvme_adminq_poll_period_us": 10000, 00:22:05.135 "nvme_ioq_poll_period_us": 0, 00:22:05.135 "io_queue_requests": 512, 00:22:05.135 "delay_cmd_submit": true, 00:22:05.135 "transport_retry_count": 4, 00:22:05.135 "bdev_retry_count": 3, 00:22:05.135 "transport_ack_timeout": 0, 00:22:05.135 "ctrlr_loss_timeout_sec": 0, 00:22:05.135 "reconnect_delay_sec": 0, 00:22:05.135 "fast_io_fail_timeout_sec": 0, 00:22:05.135 "disable_auto_failback": false, 00:22:05.135 "generate_uuids": false, 00:22:05.136 "transport_tos": 0, 00:22:05.136 "nvme_error_stat": false, 00:22:05.136 "rdma_srq_size": 0, 00:22:05.136 "io_path_stat": false, 00:22:05.136 "allow_accel_sequence": false, 00:22:05.136 "rdma_max_cq_size": 0, 00:22:05.136 "rdma_cm_event_timeout_ms": 0, 00:22:05.136 "dhchap_digests": [ 00:22:05.136 "sha256", 00:22:05.136 "sha384", 00:22:05.136 "sha512" 00:22:05.136 ], 00:22:05.136 "dhchap_dhgroups": [ 00:22:05.136 "null", 00:22:05.136 "ffdhe2048", 00:22:05.136 "ffdhe3072", 00:22:05.136 "ffdhe4096", 00:22:05.136 "ffdhe6144", 00:22:05.136 "ffdhe8192" 00:22:05.136 ] 00:22:05.136 } 00:22:05.136 }, 00:22:05.136 { 00:22:05.136 "method": "bdev_nvme_attach_controller", 00:22:05.136 "params": { 00:22:05.136 "name": "TLSTEST", 00:22:05.136 "trtype": "TCP", 00:22:05.136 "adrfam": "IPv4", 00:22:05.136 "traddr": "10.0.0.2", 00:22:05.136 "trsvcid": "4420", 00:22:05.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.136 "prchk_reftag": false, 00:22:05.136 "prchk_guard": false, 00:22:05.136 "ctrlr_loss_timeout_sec": 0, 00:22:05.136 "reconnect_delay_sec": 0, 00:22:05.136 "fast_io_fail_timeout_sec": 0, 00:22:05.136 "psk": "key0", 00:22:05.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:05.136 "hdgst": false, 00:22:05.136 "ddgst": false 00:22:05.136 } 00:22:05.136 }, 00:22:05.136 { 00:22:05.136 "method": "bdev_nvme_set_hotplug", 00:22:05.136 "params": { 00:22:05.136 "period_us": 100000, 00:22:05.136 "enable": false 
00:22:05.136 } 00:22:05.136 }, 00:22:05.136 { 00:22:05.136 "method": "bdev_wait_for_examine" 00:22:05.136 } 00:22:05.136 ] 00:22:05.136 }, 00:22:05.136 { 00:22:05.136 "subsystem": "nbd", 00:22:05.136 "config": [] 00:22:05.136 } 00:22:05.136 ] 00:22:05.136 }' 00:22:05.136 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3385402 00:22:05.136 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3385402 ']' 00:22:05.136 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3385402 00:22:05.136 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname 00:22:05.136 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:05.136 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3385402 00:22:05.136 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_2 00:22:05.136 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_2 = sudo ']' 00:22:05.136 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3385402' 00:22:05.136 killing process with pid 3385402 00:22:05.136 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3385402 00:22:05.136 Received shutdown signal, test time was about 10.000000 seconds 00:22:05.136 00:22:05.136 Latency(us) 00:22:05.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.136 =================================================================================================================== 00:22:05.136 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:05.136 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3385402 00:22:05.136 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3385000 00:22:05.136 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3385000 ']' 00:22:05.136 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3385000 00:22:05.136 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname 00:22:05.136 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:05.136 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3385000 00:22:05.398 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:22:05.398 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:22:05.398 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3385000' 00:22:05.398 killing process with pid 3385000 00:22:05.398 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3385000 00:22:05.398 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3385000 00:22:05.398 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:05.398 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:05.398 09:43:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:05.398 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.398 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:22:05.398 "subsystems": [ 00:22:05.398 { 00:22:05.398 "subsystem": "keyring", 00:22:05.398 "config": [ 00:22:05.398 { 00:22:05.398 "method": "keyring_file_add_key", 00:22:05.398 "params": { 00:22:05.398 "name": "key0", 00:22:05.398 "path": "/tmp/tmp.pnUzoXj89B" 00:22:05.398 } 00:22:05.398 } 00:22:05.398 ] 00:22:05.398 }, 00:22:05.398 { 00:22:05.398 "subsystem": "iobuf", 00:22:05.398 "config": [ 00:22:05.398 { 00:22:05.398 "method": "iobuf_set_options", 00:22:05.398 "params": { 00:22:05.398 "small_pool_count": 8192, 00:22:05.398 "large_pool_count": 1024, 00:22:05.398 "small_bufsize": 8192, 00:22:05.398 "large_bufsize": 135168 00:22:05.398 } 00:22:05.398 } 00:22:05.398 ] 00:22:05.398 }, 00:22:05.398 { 00:22:05.398 "subsystem": "sock", 00:22:05.398 "config": [ 00:22:05.398 { 00:22:05.398 "method": "sock_set_default_impl", 00:22:05.398 "params": { 00:22:05.398 "impl_name": "posix" 00:22:05.398 } 00:22:05.398 }, 00:22:05.398 { 00:22:05.398 "method": "sock_impl_set_options", 00:22:05.398 "params": { 00:22:05.398 "impl_name": "ssl", 00:22:05.398 "recv_buf_size": 4096, 00:22:05.398 "send_buf_size": 4096, 00:22:05.398 "enable_recv_pipe": true, 00:22:05.398 "enable_quickack": false, 00:22:05.398 "enable_placement_id": 0, 00:22:05.398 "enable_zerocopy_send_server": true, 00:22:05.398 "enable_zerocopy_send_client": false, 00:22:05.398 "zerocopy_threshold": 0, 00:22:05.398 "tls_version": 0, 00:22:05.398 "enable_ktls": false 00:22:05.398 } 00:22:05.398 }, 00:22:05.398 { 00:22:05.398 "method": "sock_impl_set_options", 00:22:05.398 "params": { 00:22:05.398 "impl_name": "posix", 00:22:05.398 "recv_buf_size": 2097152, 00:22:05.398 "send_buf_size": 2097152, 00:22:05.398 "enable_recv_pipe": true, 00:22:05.398 "enable_quickack": false, 00:22:05.398 "enable_placement_id": 0, 00:22:05.398 "enable_zerocopy_send_server": true, 00:22:05.398 "enable_zerocopy_send_client": false, 00:22:05.398 "zerocopy_threshold": 0, 00:22:05.398 "tls_version": 0, 00:22:05.398 "enable_ktls": false 00:22:05.398 } 00:22:05.398 } 00:22:05.398 ] 00:22:05.398 }, 00:22:05.398 { 00:22:05.398 "subsystem": "vmd", 00:22:05.398 "config": [] 00:22:05.398 }, 00:22:05.398 { 00:22:05.398 "subsystem": "accel", 00:22:05.398 "config": [ 00:22:05.398 { 00:22:05.398 "method": "accel_set_options", 00:22:05.398 "params": { 00:22:05.398 "small_cache_size": 128, 00:22:05.398 "large_cache_size": 16, 00:22:05.398 "task_count": 2048, 00:22:05.398 "sequence_count": 2048, 00:22:05.398 "buf_count": 2048 00:22:05.398 } 00:22:05.398 } 00:22:05.398 ] 00:22:05.398 }, 00:22:05.398 { 00:22:05.398 "subsystem": "bdev", 00:22:05.398 "config": [ 00:22:05.398 { 00:22:05.398 "method": "bdev_set_options", 00:22:05.398 "params": { 00:22:05.398 "bdev_io_pool_size": 65535, 00:22:05.398 "bdev_io_cache_size": 256, 00:22:05.398 "bdev_auto_examine": true, 00:22:05.398 "iobuf_small_cache_size": 128, 00:22:05.398 "iobuf_large_cache_size": 16 00:22:05.398 } 00:22:05.398 }, 00:22:05.398 { 00:22:05.398 "method": "bdev_raid_set_options", 00:22:05.398 "params": { 00:22:05.398 "process_window_size_kb": 1024, 00:22:05.398 "process_max_bandwidth_mb_sec": 0 00:22:05.398 } 00:22:05.398 }, 00:22:05.398 { 00:22:05.398 "method": "bdev_iscsi_set_options", 00:22:05.398 "params": { 00:22:05.398 "timeout_sec": 30 
00:22:05.398 } 00:22:05.398 }, 00:22:05.398 { 00:22:05.398 "method": "bdev_nvme_set_options", 00:22:05.398 "params": { 00:22:05.398 "action_on_timeout": "none", 00:22:05.398 "timeout_us": 0, 00:22:05.398 "timeout_admin_us": 0, 00:22:05.398 "keep_alive_timeout_ms": 10000, 00:22:05.398 "arbitration_burst": 0, 00:22:05.398 "low_priority_weight": 0, 00:22:05.398 "medium_priority_weight": 0, 00:22:05.398 "high_priority_weight": 0, 00:22:05.398 "nvme_adminq_poll_period_us": 10000, 00:22:05.398 "nvme_ioq_poll_period_us": 0, 00:22:05.398 "io_queue_requests": 0, 00:22:05.398 "delay_cmd_submit": true, 00:22:05.398 "transport_retry_count": 4, 00:22:05.398 "bdev_retry_count": 3, 00:22:05.398 "transport_ack_timeout": 0, 00:22:05.398 "ctrlr_loss_timeout_sec": 0, 00:22:05.398 "reconnect_delay_sec": 0, 00:22:05.398 "fast_io_fail_timeout_sec": 0, 00:22:05.398 "disable_auto_failback": false, 00:22:05.398 "generate_uuids": false, 00:22:05.398 "transport_tos": 0, 00:22:05.398 "nvme_error_stat": false, 00:22:05.398 "rdma_srq_size": 0, 00:22:05.398 "io_path_stat": false, 00:22:05.398 "allow_accel_sequence": false, 00:22:05.398 "rdma_max_cq_size": 0, 00:22:05.398 "rdma_cm_event_timeout_ms": 0, 00:22:05.399 "dhchap_digests": [ 00:22:05.399 "sha256", 00:22:05.399 "sha384", 00:22:05.399 "sha512" 00:22:05.399 ], 00:22:05.399 "dhchap_dhgroups": [ 00:22:05.399 "null", 00:22:05.399 "ffdhe2048", 00:22:05.399 "ffdhe3072", 00:22:05.399 "ffdhe4096", 00:22:05.399 "ffdhe6144", 00:22:05.399 "ffdhe8192" 00:22:05.399 ] 00:22:05.399 } 00:22:05.399 }, 00:22:05.399 { 00:22:05.399 "method": "bdev_nvme_set_hotplug", 00:22:05.399 "params": { 00:22:05.399 "period_us": 100000, 00:22:05.399 "enable": false 00:22:05.399 } 00:22:05.399 }, 00:22:05.399 { 00:22:05.399 "method": "bdev_malloc_create", 00:22:05.399 "params": { 00:22:05.399 "name": "malloc0", 00:22:05.399 "num_blocks": 8192, 00:22:05.399 "block_size": 4096, 00:22:05.399 "physical_block_size": 4096, 00:22:05.399 "uuid": "64f36b38-0800-49e0-88e4-d239b6bc02d0", 00:22:05.399 "optimal_io_boundary": 0, 00:22:05.399 "md_size": 0, 00:22:05.399 "dif_type": 0, 00:22:05.399 "dif_is_head_of_md": false, 00:22:05.399 "dif_pi_format": 0 00:22:05.399 } 00:22:05.399 }, 00:22:05.399 { 00:22:05.399 "method": "bdev_wait_for_examine" 00:22:05.399 } 00:22:05.399 ] 00:22:05.399 }, 00:22:05.399 { 00:22:05.399 "subsystem": "nbd", 00:22:05.399 "config": [] 00:22:05.399 }, 00:22:05.399 { 00:22:05.399 "subsystem": "scheduler", 00:22:05.399 "config": [ 00:22:05.399 { 00:22:05.399 "method": "framework_set_scheduler", 00:22:05.399 "params": { 00:22:05.399 "name": "static" 00:22:05.399 } 00:22:05.399 } 00:22:05.399 ] 00:22:05.399 }, 00:22:05.399 { 00:22:05.399 "subsystem": "nvmf", 00:22:05.399 "config": [ 00:22:05.399 { 00:22:05.399 "method": "nvmf_set_config", 00:22:05.399 "params": { 00:22:05.399 "discovery_filter": "match_any", 00:22:05.399 "admin_cmd_passthru": { 00:22:05.399 "identify_ctrlr": false 00:22:05.399 }, 00:22:05.399 "dhchap_digests": [ 00:22:05.399 "sha256", 00:22:05.399 "sha384", 00:22:05.399 "sha512" 00:22:05.399 ], 00:22:05.399 "dhchap_dhgroups": [ 00:22:05.399 "null", 00:22:05.399 "ffdhe2048", 00:22:05.399 "ffdhe3072", 00:22:05.399 "ffdhe4096", 00:22:05.399 "ffdhe6144", 00:22:05.399 "ffdhe8192" 00:22:05.399 ] 00:22:05.399 } 00:22:05.399 }, 00:22:05.399 { 00:22:05.399 "method": "nvmf_set_max_subsystems", 00:22:05.399 "params": { 00:22:05.399 "max_subsystems": 1024 00:22:05.399 } 00:22:05.399 }, 00:22:05.399 { 00:22:05.399 "method": "nvmf_set_crdt", 00:22:05.399 "params": { 00:22:05.399 
"crdt1": 0, 00:22:05.399 "crdt2": 0, 00:22:05.399 "crdt3": 0 00:22:05.399 } 00:22:05.399 }, 00:22:05.399 { 00:22:05.399 "method": "nvmf_create_transport", 00:22:05.399 "params": { 00:22:05.399 "trtype": "TCP", 00:22:05.399 "max_queue_depth": 128, 00:22:05.399 "max_io_qpairs_per_ctrlr": 127, 00:22:05.399 "in_capsule_data_size": 4096, 00:22:05.399 "max_io_size": 131072, 00:22:05.399 "io_unit_size": 131072, 00:22:05.399 "max_aq_depth": 128, 00:22:05.399 "num_shared_buffers": 511, 00:22:05.399 "buf_cache_size": 4294967295, 00:22:05.399 "dif_insert_or_strip": false, 00:22:05.399 "zcopy": false, 00:22:05.399 "c2h_success": false, 00:22:05.399 "sock_priority": 0, 00:22:05.399 "abort_timeout_sec": 1, 00:22:05.399 "ack_timeout": 0, 00:22:05.399 "data_wr_pool_size": 0 00:22:05.399 } 00:22:05.399 }, 00:22:05.399 { 00:22:05.399 "method": "nvmf_create_subsystem", 00:22:05.399 "params": { 00:22:05.399 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.399 "allow_any_host": false, 00:22:05.399 "serial_number": "SPDK00000000000001", 00:22:05.399 "model_number": "SPDK bdev Controller", 00:22:05.399 "max_namespaces": 10, 00:22:05.399 "min_cntlid": 1, 00:22:05.399 "max_cntlid": 65519, 00:22:05.399 "ana_reporting": false 00:22:05.399 } 00:22:05.399 }, 00:22:05.399 { 00:22:05.399 "method": "nvmf_subsystem_add_host", 00:22:05.399 "params": { 00:22:05.399 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.399 "host": "nqn.2016-06.io.spdk:host1", 00:22:05.399 "psk": "key0" 00:22:05.399 } 00:22:05.399 }, 00:22:05.399 { 00:22:05.399 "method": "nvmf_subsystem_add_ns", 00:22:05.399 "params": { 00:22:05.399 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.399 "namespace": { 00:22:05.399 "nsid": 1, 00:22:05.399 "bdev_name": "malloc0", 00:22:05.399 "nguid": "64F36B38080049E088E4D239B6BC02D0", 00:22:05.399 "uuid": "64f36b38-0800-49e0-88e4-d239b6bc02d0", 00:22:05.399 "no_auto_visible": false 00:22:05.399 } 00:22:05.399 } 00:22:05.399 }, 00:22:05.399 { 00:22:05.399 "method": "nvmf_subsystem_add_listener", 00:22:05.399 "params": { 00:22:05.399 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.399 "listen_address": { 00:22:05.399 "trtype": "TCP", 00:22:05.399 "adrfam": "IPv4", 00:22:05.399 "traddr": "10.0.0.2", 00:22:05.399 "trsvcid": "4420" 00:22:05.399 }, 00:22:05.399 "secure_channel": true 00:22:05.399 } 00:22:05.399 } 00:22:05.399 ] 00:22:05.399 } 00:22:05.399 ] 00:22:05.399 }' 00:22:05.399 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3386023 00:22:05.399 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3386023 00:22:05.399 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:05.399 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3386023 ']' 00:22:05.399 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.399 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:05.399 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:05.399 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:05.399 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.399 [2024-10-07 09:43:05.001355] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:22:05.399 [2024-10-07 09:43:05.001408] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.661 [2024-10-07 09:43:05.086040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.661 [2024-10-07 09:43:05.139298] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.661 [2024-10-07 09:43:05.139334] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.661 [2024-10-07 09:43:05.139340] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.661 [2024-10-07 09:43:05.139344] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.661 [2024-10-07 09:43:05.139348] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.661 [2024-10-07 09:43:05.139821] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.921 [2024-10-07 09:43:05.343606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.921 [2024-10-07 09:43:05.375636] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:05.921 [2024-10-07 09:43:05.375848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.181 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:22:06.181 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:22:06.181 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:06.181 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@733 -- # xtrace_disable 00:22:06.181 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.181 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.442 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3386071 00:22:06.442 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3386071 /var/tmp/bdevperf.sock 00:22:06.442 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3386071 ']' 00:22:06.442 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:06.442 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:06.442 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:06.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
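Note: the initiator side is wired up the same way. bdevperf starts with '-z' so it idles on its RPC socket, the saved bdevperf configuration (echoed just below) is passed through '-c /dev/fd/63' to pre-load key0 and attach the TLSTEST controller, and I/O is only triggered afterwards over the socket. Roughly, with the flags this run uses ('bdevperf.json' standing in for the echoed config):

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c bdevperf.json &
  # once the socket is listening (the harness polls via waitforlisten):
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests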
00:22:06.442 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:06.442 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:06.442 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.442 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:22:06.442 "subsystems": [ 00:22:06.442 { 00:22:06.442 "subsystem": "keyring", 00:22:06.442 "config": [ 00:22:06.442 { 00:22:06.442 "method": "keyring_file_add_key", 00:22:06.442 "params": { 00:22:06.442 "name": "key0", 00:22:06.442 "path": "/tmp/tmp.pnUzoXj89B" 00:22:06.442 } 00:22:06.442 } 00:22:06.442 ] 00:22:06.442 }, 00:22:06.442 { 00:22:06.442 "subsystem": "iobuf", 00:22:06.442 "config": [ 00:22:06.442 { 00:22:06.442 "method": "iobuf_set_options", 00:22:06.442 "params": { 00:22:06.442 "small_pool_count": 8192, 00:22:06.442 "large_pool_count": 1024, 00:22:06.442 "small_bufsize": 8192, 00:22:06.442 "large_bufsize": 135168 00:22:06.442 } 00:22:06.442 } 00:22:06.442 ] 00:22:06.442 }, 00:22:06.442 { 00:22:06.442 "subsystem": "sock", 00:22:06.442 "config": [ 00:22:06.442 { 00:22:06.442 "method": "sock_set_default_impl", 00:22:06.442 "params": { 00:22:06.442 "impl_name": "posix" 00:22:06.442 } 00:22:06.442 }, 00:22:06.442 { 00:22:06.442 "method": "sock_impl_set_options", 00:22:06.442 "params": { 00:22:06.442 "impl_name": "ssl", 00:22:06.442 "recv_buf_size": 4096, 00:22:06.442 "send_buf_size": 4096, 00:22:06.442 "enable_recv_pipe": true, 00:22:06.442 "enable_quickack": false, 00:22:06.442 "enable_placement_id": 0, 00:22:06.442 "enable_zerocopy_send_server": true, 00:22:06.442 "enable_zerocopy_send_client": false, 00:22:06.442 "zerocopy_threshold": 0, 00:22:06.442 "tls_version": 0, 00:22:06.443 "enable_ktls": false 00:22:06.443 } 00:22:06.443 }, 00:22:06.443 { 00:22:06.443 "method": "sock_impl_set_options", 00:22:06.443 "params": { 00:22:06.443 "impl_name": "posix", 00:22:06.443 "recv_buf_size": 2097152, 00:22:06.443 "send_buf_size": 2097152, 00:22:06.443 "enable_recv_pipe": true, 00:22:06.443 "enable_quickack": false, 00:22:06.443 "enable_placement_id": 0, 00:22:06.443 "enable_zerocopy_send_server": true, 00:22:06.443 "enable_zerocopy_send_client": false, 00:22:06.443 "zerocopy_threshold": 0, 00:22:06.443 "tls_version": 0, 00:22:06.443 "enable_ktls": false 00:22:06.443 } 00:22:06.443 } 00:22:06.443 ] 00:22:06.443 }, 00:22:06.443 { 00:22:06.443 "subsystem": "vmd", 00:22:06.443 "config": [] 00:22:06.443 }, 00:22:06.443 { 00:22:06.443 "subsystem": "accel", 00:22:06.443 "config": [ 00:22:06.443 { 00:22:06.443 "method": "accel_set_options", 00:22:06.443 "params": { 00:22:06.443 "small_cache_size": 128, 00:22:06.443 "large_cache_size": 16, 00:22:06.443 "task_count": 2048, 00:22:06.443 "sequence_count": 2048, 00:22:06.443 "buf_count": 2048 00:22:06.443 } 00:22:06.443 } 00:22:06.443 ] 00:22:06.443 }, 00:22:06.443 { 00:22:06.443 "subsystem": "bdev", 00:22:06.443 "config": [ 00:22:06.443 { 00:22:06.443 "method": "bdev_set_options", 00:22:06.443 "params": { 00:22:06.443 "bdev_io_pool_size": 65535, 00:22:06.443 "bdev_io_cache_size": 256, 00:22:06.443 "bdev_auto_examine": true, 00:22:06.443 "iobuf_small_cache_size": 128, 00:22:06.443 "iobuf_large_cache_size": 16 00:22:06.443 } 00:22:06.443 }, 00:22:06.443 { 00:22:06.443 "method": "bdev_raid_set_options", 00:22:06.443 
"params": { 00:22:06.443 "process_window_size_kb": 1024, 00:22:06.443 "process_max_bandwidth_mb_sec": 0 00:22:06.443 } 00:22:06.443 }, 00:22:06.443 { 00:22:06.443 "method": "bdev_iscsi_set_options", 00:22:06.443 "params": { 00:22:06.443 "timeout_sec": 30 00:22:06.443 } 00:22:06.443 }, 00:22:06.443 { 00:22:06.443 "method": "bdev_nvme_set_options", 00:22:06.443 "params": { 00:22:06.443 "action_on_timeout": "none", 00:22:06.443 "timeout_us": 0, 00:22:06.443 "timeout_admin_us": 0, 00:22:06.443 "keep_alive_timeout_ms": 10000, 00:22:06.443 "arbitration_burst": 0, 00:22:06.443 "low_priority_weight": 0, 00:22:06.443 "medium_priority_weight": 0, 00:22:06.443 "high_priority_weight": 0, 00:22:06.443 "nvme_adminq_poll_period_us": 10000, 00:22:06.443 "nvme_ioq_poll_period_us": 0, 00:22:06.443 "io_queue_requests": 512, 00:22:06.443 "delay_cmd_submit": true, 00:22:06.443 "transport_retry_count": 4, 00:22:06.443 "bdev_retry_count": 3, 00:22:06.443 "transport_ack_timeout": 0, 00:22:06.443 "ctrlr_loss_timeout_sec": 0, 00:22:06.443 "reconnect_delay_sec": 0, 00:22:06.443 "fast_io_fail_timeout_sec": 0, 00:22:06.443 "disable_auto_failback": false, 00:22:06.443 "generate_uuids": false, 00:22:06.443 "transport_tos": 0, 00:22:06.443 "nvme_error_stat": false, 00:22:06.443 "rdma_srq_size": 0, 00:22:06.443 "io_path_stat": false, 00:22:06.443 "allow_accel_sequence": false, 00:22:06.443 "rdma_max_cq_size": 0, 00:22:06.443 "rdma_cm_event_timeout_ms": 0, 00:22:06.443 "dhchap_digests": [ 00:22:06.443 "sha256", 00:22:06.443 "sha384", 00:22:06.443 "sha512" 00:22:06.443 ], 00:22:06.443 "dhchap_dhgroups": [ 00:22:06.443 "null", 00:22:06.443 "ffdhe2048", 00:22:06.443 "ffdhe3072", 00:22:06.443 "ffdhe4096", 00:22:06.443 "ffdhe6144", 00:22:06.443 "ffdhe8192" 00:22:06.443 ] 00:22:06.443 } 00:22:06.443 }, 00:22:06.443 { 00:22:06.443 "method": "bdev_nvme_attach_controller", 00:22:06.443 "params": { 00:22:06.443 "name": "TLSTEST", 00:22:06.443 "trtype": "TCP", 00:22:06.443 "adrfam": "IPv4", 00:22:06.443 "traddr": "10.0.0.2", 00:22:06.443 "trsvcid": "4420", 00:22:06.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.443 "prchk_reftag": false, 00:22:06.443 "prchk_guard": false, 00:22:06.443 "ctrlr_loss_timeout_sec": 0, 00:22:06.443 "reconnect_delay_sec": 0, 00:22:06.443 "fast_io_fail_timeout_sec": 0, 00:22:06.443 "psk": "key0", 00:22:06.443 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:06.443 "hdgst": false, 00:22:06.443 "ddgst": false 00:22:06.443 } 00:22:06.443 }, 00:22:06.443 { 00:22:06.443 "method": "bdev_nvme_set_hotplug", 00:22:06.443 "params": { 00:22:06.443 "period_us": 100000, 00:22:06.443 "enable": false 00:22:06.443 } 00:22:06.443 }, 00:22:06.443 { 00:22:06.443 "method": "bdev_wait_for_examine" 00:22:06.443 } 00:22:06.443 ] 00:22:06.443 }, 00:22:06.443 { 00:22:06.443 "subsystem": "nbd", 00:22:06.443 "config": [] 00:22:06.443 } 00:22:06.443 ] 00:22:06.443 }' 00:22:06.443 [2024-10-07 09:43:05.891686] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:22:06.443 [2024-10-07 09:43:05.891739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3386071 ] 00:22:06.443 [2024-10-07 09:43:05.969129] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.443 [2024-10-07 09:43:06.032053] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.704 [2024-10-07 09:43:06.171374] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:07.277 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:22:07.277 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:22:07.277 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:07.277 Running I/O for 10 seconds... 00:22:17.194 5411.00 IOPS, 21.14 MiB/s 5306.00 IOPS, 20.73 MiB/s 5599.67 IOPS, 21.87 MiB/s 5766.00 IOPS, 22.52 MiB/s 5920.60 IOPS, 23.13 MiB/s 5755.00 IOPS, 22.48 MiB/s 5810.43 IOPS, 22.70 MiB/s 5803.00 IOPS, 22.67 MiB/s 5686.44 IOPS, 22.21 MiB/s 5640.90 IOPS, 22.03 MiB/s 00:22:17.194 Latency(us) 00:22:17.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.194 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:17.194 Verification LBA range: start 0x0 length 0x2000 00:22:17.194 TLSTESTn1 : 10.02 5640.91 22.03 0.00 0.00 22651.82 5952.85 28398.93 00:22:17.194 =================================================================================================================== 00:22:17.194 Total : 5640.91 22.03 0.00 0.00 22651.82 5952.85 28398.93 00:22:17.194 { 00:22:17.194 "results": [ 00:22:17.194 { 00:22:17.194 "job": "TLSTESTn1", 00:22:17.194 "core_mask": "0x4", 00:22:17.194 "workload": "verify", 00:22:17.194 "status": "finished", 00:22:17.194 "verify_range": { 00:22:17.194 "start": 0, 00:22:17.194 "length": 8192 00:22:17.194 }, 00:22:17.194 "queue_depth": 128, 00:22:17.194 "io_size": 4096, 00:22:17.194 "runtime": 10.02267, 00:22:17.194 "iops": 5640.912052377261, 00:22:17.194 "mibps": 22.034812704598675, 00:22:17.194 "io_failed": 0, 00:22:17.194 "io_timeout": 0, 00:22:17.194 "avg_latency_us": 22651.81747881918, 00:22:17.194 "min_latency_us": 5952.8533333333335, 00:22:17.194 "max_latency_us": 28398.933333333334 00:22:17.194 } 00:22:17.194 ], 00:22:17.194 "core_count": 1 00:22:17.194 } 00:22:17.194 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:17.194 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3386071 00:22:17.194 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3386071 ']' 00:22:17.194 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3386071 00:22:17.194 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname 00:22:17.194 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:17.194 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3386071 00:22:17.454 09:43:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_2 00:22:17.454 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_2 = sudo ']' 00:22:17.454 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3386071' 00:22:17.454 killing process with pid 3386071 00:22:17.454 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3386071 00:22:17.454 Received shutdown signal, test time was about 10.000000 seconds 00:22:17.454 00:22:17.454 Latency(us) 00:22:17.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.454 =================================================================================================================== 00:22:17.454 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:17.454 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3386071 00:22:17.454 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3386023 00:22:17.454 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3386023 ']' 00:22:17.454 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3386023 00:22:17.454 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname 00:22:17.454 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:17.454 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3386023 00:22:17.454 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:22:17.454 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:22:17.454 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3386023' 00:22:17.454 killing process with pid 3386023 00:22:17.454 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3386023 00:22:17.454 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3386023 00:22:17.715 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:22:17.715 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:17.715 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:17.715 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.715 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3388408 00:22:17.715 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3388408 00:22:17.715 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:17.715 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3388408 ']' 00:22:17.715 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.715 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:17.715 09:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.715 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:17.715 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.715 [2024-10-07 09:43:17.281984] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:22:17.715 [2024-10-07 09:43:17.282038] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.715 [2024-10-07 09:43:17.365704] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.976 [2024-10-07 09:43:17.444414] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.976 [2024-10-07 09:43:17.444476] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.976 [2024-10-07 09:43:17.444484] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.976 [2024-10-07 09:43:17.444492] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.976 [2024-10-07 09:43:17.444498] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:17.976 [2024-10-07 09:43:17.445262] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.548 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:22:18.548 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:22:18.548 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:18.548 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@733 -- # xtrace_disable 00:22:18.548 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.548 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.548 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.pnUzoXj89B 00:22:18.548 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pnUzoXj89B 00:22:18.548 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:18.809 [2024-10-07 09:43:18.307631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.809 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:19.070 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:19.070 [2024-10-07 09:43:18.676572] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered 
experimental 00:22:19.070 [2024-10-07 09:43:18.676932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.070 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:19.330 malloc0 00:22:19.330 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:19.590 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pnUzoXj89B 00:22:19.590 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:19.849 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3388775 00:22:19.849 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:19.849 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:19.849 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3388775 /var/tmp/bdevperf.sock 00:22:19.849 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3388775 ']' 00:22:19.849 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:19.849 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:19.849 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:19.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:19.850 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:19.850 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.850 [2024-10-07 09:43:19.486198] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:22:19.850 [2024-10-07 09:43:19.486277] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3388775 ] 00:22:20.110 [2024-10-07 09:43:19.568529] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.110 [2024-10-07 09:43:19.629516] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.682 09:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:22:20.682 09:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:22:20.682 09:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pnUzoXj89B 00:22:20.941 09:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:20.941 [2024-10-07 09:43:20.598219] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:21.201 nvme0n1 00:22:21.201 09:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:21.201 Running I/O for 1 seconds... 00:22:22.145 4480.00 IOPS, 17.50 MiB/s 00:22:22.145 Latency(us) 00:22:22.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.146 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:22.146 Verification LBA range: start 0x0 length 0x2000 00:22:22.146 nvme0n1 : 1.02 4499.50 17.58 0.00 0.00 28182.08 5652.48 41287.68 00:22:22.146 =================================================================================================================== 00:22:22.146 Total : 4499.50 17.58 0.00 0.00 28182.08 5652.48 41287.68 00:22:22.146 { 00:22:22.146 "results": [ 00:22:22.146 { 00:22:22.146 "job": "nvme0n1", 00:22:22.146 "core_mask": "0x2", 00:22:22.146 "workload": "verify", 00:22:22.146 "status": "finished", 00:22:22.146 "verify_range": { 00:22:22.146 "start": 0, 00:22:22.146 "length": 8192 00:22:22.146 }, 00:22:22.146 "queue_depth": 128, 00:22:22.146 "io_size": 4096, 00:22:22.146 "runtime": 1.024335, 00:22:22.146 "iops": 4499.504556614779, 00:22:22.146 "mibps": 17.57618967427648, 00:22:22.146 "io_failed": 0, 00:22:22.146 "io_timeout": 0, 00:22:22.146 "avg_latency_us": 28182.076690533013, 00:22:22.146 "min_latency_us": 5652.48, 00:22:22.146 "max_latency_us": 41287.68 00:22:22.146 } 00:22:22.146 ], 00:22:22.146 "core_count": 1 00:22:22.146 } 00:22:22.407 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3388775 00:22:22.407 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3388775 ']' 00:22:22.407 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3388775 00:22:22.407 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname 00:22:22.407 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:22.407 
09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3388775 00:22:22.407 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:22:22.407 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:22:22.407 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3388775' 00:22:22.407 killing process with pid 3388775 00:22:22.407 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3388775 00:22:22.407 Received shutdown signal, test time was about 1.000000 seconds 00:22:22.407 00:22:22.407 Latency(us) 00:22:22.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.407 =================================================================================================================== 00:22:22.407 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:22.407 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3388775 00:22:22.407 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3388408 00:22:22.407 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3388408 ']' 00:22:22.407 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3388408 00:22:22.407 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname 00:22:22.407 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:22.407 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3388408 00:22:22.668 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:22:22.668 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:22:22.668 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3388408' 00:22:22.668 killing process with pid 3388408 00:22:22.668 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3388408 00:22:22.669 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3388408 00:22:22.669 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:22:22.669 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:22.669 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:22.669 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.669 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3389428 00:22:22.669 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3389428 00:22:22.669 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:22.669 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3389428 ']' 00:22:22.669 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.669 
09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:22.669 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.669 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:22.669 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.669 [2024-10-07 09:43:22.264014] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:22:22.669 [2024-10-07 09:43:22.264075] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.929 [2024-10-07 09:43:22.346492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.929 [2024-10-07 09:43:22.400542] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.929 [2024-10-07 09:43:22.400577] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.929 [2024-10-07 09:43:22.400583] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.929 [2024-10-07 09:43:22.400587] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.929 [2024-10-07 09:43:22.400592] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:22.929 [2024-10-07 09:43:22.401066] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@733 -- # xtrace_disable 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.499 [2024-10-07 09:43:23.092538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.499 malloc0 00:22:23.499 [2024-10-07 09:43:23.126585] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:23.499 [2024-10-07 09:43:23.126787] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3389489 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3389489 /var/tmp/bdevperf.sock 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3389489 ']' 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:23.499 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.760 [2024-10-07 09:43:23.216535] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:22:23.760 [2024-10-07 09:43:23.216590] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3389489 ] 00:22:23.760 [2024-10-07 09:43:23.294672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.760 [2024-10-07 09:43:23.348573] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.332 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:22:24.332 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:22:24.332 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pnUzoXj89B 00:22:24.592 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:24.854 [2024-10-07 09:43:24.280246] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:24.854 nvme0n1 00:22:24.854 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:24.854 Running I/O for 1 seconds... 00:22:26.239 5675.00 IOPS, 22.17 MiB/s 00:22:26.239 Latency(us) 00:22:26.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.239 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:26.239 Verification LBA range: start 0x0 length 0x2000 00:22:26.239 nvme0n1 : 1.01 5727.35 22.37 0.00 0.00 22207.12 4751.36 25886.72 00:22:26.239 =================================================================================================================== 00:22:26.239 Total : 5727.35 22.37 0.00 0.00 22207.12 4751.36 25886.72 00:22:26.239 { 00:22:26.239 "results": [ 00:22:26.239 { 00:22:26.239 "job": "nvme0n1", 00:22:26.239 "core_mask": "0x2", 00:22:26.239 "workload": "verify", 00:22:26.239 "status": "finished", 00:22:26.239 "verify_range": { 00:22:26.239 "start": 0, 00:22:26.239 "length": 8192 00:22:26.239 }, 00:22:26.239 "queue_depth": 128, 00:22:26.239 "io_size": 4096, 00:22:26.239 "runtime": 1.013384, 00:22:26.239 "iops": 5727.345211686784, 00:22:26.239 "mibps": 22.3724422331515, 00:22:26.239 "io_failed": 0, 00:22:26.239 "io_timeout": 0, 00:22:26.239 "avg_latency_us": 22207.123142660235, 00:22:26.239 "min_latency_us": 4751.36, 00:22:26.239 "max_latency_us": 25886.72 00:22:26.239 } 00:22:26.239 ], 00:22:26.239 "core_count": 1 00:22:26.239 } 00:22:26.239 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:22:26.239 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:26.239 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.239 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:26.239 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:22:26.239 "subsystems": [ 00:22:26.239 { 
00:22:26.239 "subsystem": "keyring", 00:22:26.239 "config": [ 00:22:26.239 { 00:22:26.239 "method": "keyring_file_add_key", 00:22:26.239 "params": { 00:22:26.239 "name": "key0", 00:22:26.239 "path": "/tmp/tmp.pnUzoXj89B" 00:22:26.239 } 00:22:26.239 } 00:22:26.239 ] 00:22:26.239 }, 00:22:26.239 { 00:22:26.239 "subsystem": "iobuf", 00:22:26.239 "config": [ 00:22:26.239 { 00:22:26.239 "method": "iobuf_set_options", 00:22:26.239 "params": { 00:22:26.239 "small_pool_count": 8192, 00:22:26.239 "large_pool_count": 1024, 00:22:26.239 "small_bufsize": 8192, 00:22:26.239 "large_bufsize": 135168 00:22:26.239 } 00:22:26.239 } 00:22:26.239 ] 00:22:26.239 }, 00:22:26.239 { 00:22:26.239 "subsystem": "sock", 00:22:26.239 "config": [ 00:22:26.239 { 00:22:26.239 "method": "sock_set_default_impl", 00:22:26.239 "params": { 00:22:26.239 "impl_name": "posix" 00:22:26.239 } 00:22:26.239 }, 00:22:26.239 { 00:22:26.239 "method": "sock_impl_set_options", 00:22:26.239 "params": { 00:22:26.239 "impl_name": "ssl", 00:22:26.239 "recv_buf_size": 4096, 00:22:26.239 "send_buf_size": 4096, 00:22:26.239 "enable_recv_pipe": true, 00:22:26.239 "enable_quickack": false, 00:22:26.239 "enable_placement_id": 0, 00:22:26.239 "enable_zerocopy_send_server": true, 00:22:26.239 "enable_zerocopy_send_client": false, 00:22:26.239 "zerocopy_threshold": 0, 00:22:26.239 "tls_version": 0, 00:22:26.239 "enable_ktls": false 00:22:26.239 } 00:22:26.239 }, 00:22:26.239 { 00:22:26.239 "method": "sock_impl_set_options", 00:22:26.239 "params": { 00:22:26.239 "impl_name": "posix", 00:22:26.239 "recv_buf_size": 2097152, 00:22:26.239 "send_buf_size": 2097152, 00:22:26.239 "enable_recv_pipe": true, 00:22:26.239 "enable_quickack": false, 00:22:26.239 "enable_placement_id": 0, 00:22:26.239 "enable_zerocopy_send_server": true, 00:22:26.239 "enable_zerocopy_send_client": false, 00:22:26.239 "zerocopy_threshold": 0, 00:22:26.239 "tls_version": 0, 00:22:26.239 "enable_ktls": false 00:22:26.239 } 00:22:26.239 } 00:22:26.239 ] 00:22:26.239 }, 00:22:26.239 { 00:22:26.239 "subsystem": "vmd", 00:22:26.239 "config": [] 00:22:26.239 }, 00:22:26.239 { 00:22:26.239 "subsystem": "accel", 00:22:26.239 "config": [ 00:22:26.239 { 00:22:26.239 "method": "accel_set_options", 00:22:26.239 "params": { 00:22:26.239 "small_cache_size": 128, 00:22:26.239 "large_cache_size": 16, 00:22:26.239 "task_count": 2048, 00:22:26.239 "sequence_count": 2048, 00:22:26.239 "buf_count": 2048 00:22:26.239 } 00:22:26.239 } 00:22:26.239 ] 00:22:26.239 }, 00:22:26.239 { 00:22:26.239 "subsystem": "bdev", 00:22:26.239 "config": [ 00:22:26.239 { 00:22:26.239 "method": "bdev_set_options", 00:22:26.239 "params": { 00:22:26.239 "bdev_io_pool_size": 65535, 00:22:26.239 "bdev_io_cache_size": 256, 00:22:26.239 "bdev_auto_examine": true, 00:22:26.239 "iobuf_small_cache_size": 128, 00:22:26.239 "iobuf_large_cache_size": 16 00:22:26.239 } 00:22:26.239 }, 00:22:26.239 { 00:22:26.239 "method": "bdev_raid_set_options", 00:22:26.239 "params": { 00:22:26.239 "process_window_size_kb": 1024, 00:22:26.239 "process_max_bandwidth_mb_sec": 0 00:22:26.239 } 00:22:26.239 }, 00:22:26.239 { 00:22:26.239 "method": "bdev_iscsi_set_options", 00:22:26.239 "params": { 00:22:26.239 "timeout_sec": 30 00:22:26.239 } 00:22:26.239 }, 00:22:26.239 { 00:22:26.239 "method": "bdev_nvme_set_options", 00:22:26.239 "params": { 00:22:26.239 "action_on_timeout": "none", 00:22:26.239 "timeout_us": 0, 00:22:26.239 "timeout_admin_us": 0, 00:22:26.239 "keep_alive_timeout_ms": 10000, 00:22:26.239 "arbitration_burst": 0, 00:22:26.239 
"low_priority_weight": 0, 00:22:26.239 "medium_priority_weight": 0, 00:22:26.239 "high_priority_weight": 0, 00:22:26.239 "nvme_adminq_poll_period_us": 10000, 00:22:26.239 "nvme_ioq_poll_period_us": 0, 00:22:26.239 "io_queue_requests": 0, 00:22:26.239 "delay_cmd_submit": true, 00:22:26.239 "transport_retry_count": 4, 00:22:26.239 "bdev_retry_count": 3, 00:22:26.239 "transport_ack_timeout": 0, 00:22:26.239 "ctrlr_loss_timeout_sec": 0, 00:22:26.239 "reconnect_delay_sec": 0, 00:22:26.239 "fast_io_fail_timeout_sec": 0, 00:22:26.239 "disable_auto_failback": false, 00:22:26.239 "generate_uuids": false, 00:22:26.239 "transport_tos": 0, 00:22:26.239 "nvme_error_stat": false, 00:22:26.239 "rdma_srq_size": 0, 00:22:26.239 "io_path_stat": false, 00:22:26.239 "allow_accel_sequence": false, 00:22:26.239 "rdma_max_cq_size": 0, 00:22:26.239 "rdma_cm_event_timeout_ms": 0, 00:22:26.239 "dhchap_digests": [ 00:22:26.239 "sha256", 00:22:26.239 "sha384", 00:22:26.239 "sha512" 00:22:26.239 ], 00:22:26.239 "dhchap_dhgroups": [ 00:22:26.239 "null", 00:22:26.239 "ffdhe2048", 00:22:26.239 "ffdhe3072", 00:22:26.239 "ffdhe4096", 00:22:26.239 "ffdhe6144", 00:22:26.239 "ffdhe8192" 00:22:26.239 ] 00:22:26.239 } 00:22:26.239 }, 00:22:26.239 { 00:22:26.239 "method": "bdev_nvme_set_hotplug", 00:22:26.239 "params": { 00:22:26.239 "period_us": 100000, 00:22:26.239 "enable": false 00:22:26.239 } 00:22:26.239 }, 00:22:26.239 { 00:22:26.239 "method": "bdev_malloc_create", 00:22:26.239 "params": { 00:22:26.239 "name": "malloc0", 00:22:26.239 "num_blocks": 8192, 00:22:26.239 "block_size": 4096, 00:22:26.239 "physical_block_size": 4096, 00:22:26.239 "uuid": "d4a82356-b7db-4695-b927-42e7928f5e23", 00:22:26.239 "optimal_io_boundary": 0, 00:22:26.239 "md_size": 0, 00:22:26.239 "dif_type": 0, 00:22:26.239 "dif_is_head_of_md": false, 00:22:26.239 "dif_pi_format": 0 00:22:26.239 } 00:22:26.239 }, 00:22:26.239 { 00:22:26.239 "method": "bdev_wait_for_examine" 00:22:26.239 } 00:22:26.239 ] 00:22:26.239 }, 00:22:26.239 { 00:22:26.239 "subsystem": "nbd", 00:22:26.239 "config": [] 00:22:26.239 }, 00:22:26.239 { 00:22:26.239 "subsystem": "scheduler", 00:22:26.239 "config": [ 00:22:26.239 { 00:22:26.239 "method": "framework_set_scheduler", 00:22:26.239 "params": { 00:22:26.239 "name": "static" 00:22:26.239 } 00:22:26.239 } 00:22:26.239 ] 00:22:26.239 }, 00:22:26.239 { 00:22:26.239 "subsystem": "nvmf", 00:22:26.239 "config": [ 00:22:26.239 { 00:22:26.239 "method": "nvmf_set_config", 00:22:26.239 "params": { 00:22:26.239 "discovery_filter": "match_any", 00:22:26.239 "admin_cmd_passthru": { 00:22:26.239 "identify_ctrlr": false 00:22:26.239 }, 00:22:26.239 "dhchap_digests": [ 00:22:26.239 "sha256", 00:22:26.239 "sha384", 00:22:26.239 "sha512" 00:22:26.239 ], 00:22:26.239 "dhchap_dhgroups": [ 00:22:26.239 "null", 00:22:26.239 "ffdhe2048", 00:22:26.239 "ffdhe3072", 00:22:26.239 "ffdhe4096", 00:22:26.239 "ffdhe6144", 00:22:26.239 "ffdhe8192" 00:22:26.239 ] 00:22:26.239 } 00:22:26.239 }, 00:22:26.239 { 00:22:26.239 "method": "nvmf_set_max_subsystems", 00:22:26.239 "params": { 00:22:26.239 "max_subsystems": 1024 00:22:26.239 } 00:22:26.239 }, 00:22:26.239 { 00:22:26.239 "method": "nvmf_set_crdt", 00:22:26.239 "params": { 00:22:26.240 "crdt1": 0, 00:22:26.240 "crdt2": 0, 00:22:26.240 "crdt3": 0 00:22:26.240 } 00:22:26.240 }, 00:22:26.240 { 00:22:26.240 "method": "nvmf_create_transport", 00:22:26.240 "params": { 00:22:26.240 "trtype": "TCP", 00:22:26.240 "max_queue_depth": 128, 00:22:26.240 "max_io_qpairs_per_ctrlr": 127, 00:22:26.240 
"in_capsule_data_size": 4096, 00:22:26.240 "max_io_size": 131072, 00:22:26.240 "io_unit_size": 131072, 00:22:26.240 "max_aq_depth": 128, 00:22:26.240 "num_shared_buffers": 511, 00:22:26.240 "buf_cache_size": 4294967295, 00:22:26.240 "dif_insert_or_strip": false, 00:22:26.240 "zcopy": false, 00:22:26.240 "c2h_success": false, 00:22:26.240 "sock_priority": 0, 00:22:26.240 "abort_timeout_sec": 1, 00:22:26.240 "ack_timeout": 0, 00:22:26.240 "data_wr_pool_size": 0 00:22:26.240 } 00:22:26.240 }, 00:22:26.240 { 00:22:26.240 "method": "nvmf_create_subsystem", 00:22:26.240 "params": { 00:22:26.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.240 "allow_any_host": false, 00:22:26.240 "serial_number": "00000000000000000000", 00:22:26.240 "model_number": "SPDK bdev Controller", 00:22:26.240 "max_namespaces": 32, 00:22:26.240 "min_cntlid": 1, 00:22:26.240 "max_cntlid": 65519, 00:22:26.240 "ana_reporting": false 00:22:26.240 } 00:22:26.240 }, 00:22:26.240 { 00:22:26.240 "method": "nvmf_subsystem_add_host", 00:22:26.240 "params": { 00:22:26.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.240 "host": "nqn.2016-06.io.spdk:host1", 00:22:26.240 "psk": "key0" 00:22:26.240 } 00:22:26.240 }, 00:22:26.240 { 00:22:26.240 "method": "nvmf_subsystem_add_ns", 00:22:26.240 "params": { 00:22:26.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.240 "namespace": { 00:22:26.240 "nsid": 1, 00:22:26.240 "bdev_name": "malloc0", 00:22:26.240 "nguid": "D4A82356B7DB4695B92742E7928F5E23", 00:22:26.240 "uuid": "d4a82356-b7db-4695-b927-42e7928f5e23", 00:22:26.240 "no_auto_visible": false 00:22:26.240 } 00:22:26.240 } 00:22:26.240 }, 00:22:26.240 { 00:22:26.240 "method": "nvmf_subsystem_add_listener", 00:22:26.240 "params": { 00:22:26.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.240 "listen_address": { 00:22:26.240 "trtype": "TCP", 00:22:26.240 "adrfam": "IPv4", 00:22:26.240 "traddr": "10.0.0.2", 00:22:26.240 "trsvcid": "4420" 00:22:26.240 }, 00:22:26.240 "secure_channel": false, 00:22:26.240 "sock_impl": "ssl" 00:22:26.240 } 00:22:26.240 } 00:22:26.240 ] 00:22:26.240 } 00:22:26.240 ] 00:22:26.240 }' 00:22:26.240 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:26.240 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:22:26.240 "subsystems": [ 00:22:26.240 { 00:22:26.240 "subsystem": "keyring", 00:22:26.240 "config": [ 00:22:26.240 { 00:22:26.240 "method": "keyring_file_add_key", 00:22:26.240 "params": { 00:22:26.240 "name": "key0", 00:22:26.240 "path": "/tmp/tmp.pnUzoXj89B" 00:22:26.240 } 00:22:26.240 } 00:22:26.240 ] 00:22:26.240 }, 00:22:26.240 { 00:22:26.240 "subsystem": "iobuf", 00:22:26.240 "config": [ 00:22:26.240 { 00:22:26.240 "method": "iobuf_set_options", 00:22:26.240 "params": { 00:22:26.240 "small_pool_count": 8192, 00:22:26.240 "large_pool_count": 1024, 00:22:26.240 "small_bufsize": 8192, 00:22:26.240 "large_bufsize": 135168 00:22:26.240 } 00:22:26.240 } 00:22:26.240 ] 00:22:26.240 }, 00:22:26.240 { 00:22:26.240 "subsystem": "sock", 00:22:26.240 "config": [ 00:22:26.240 { 00:22:26.240 "method": "sock_set_default_impl", 00:22:26.240 "params": { 00:22:26.240 "impl_name": "posix" 00:22:26.240 } 00:22:26.240 }, 00:22:26.240 { 00:22:26.240 "method": "sock_impl_set_options", 00:22:26.240 "params": { 00:22:26.240 "impl_name": "ssl", 00:22:26.240 "recv_buf_size": 4096, 00:22:26.240 "send_buf_size": 4096, 00:22:26.240 "enable_recv_pipe": true, 00:22:26.240 
"enable_quickack": false, 00:22:26.240 "enable_placement_id": 0, 00:22:26.240 "enable_zerocopy_send_server": true, 00:22:26.240 "enable_zerocopy_send_client": false, 00:22:26.240 "zerocopy_threshold": 0, 00:22:26.240 "tls_version": 0, 00:22:26.240 "enable_ktls": false 00:22:26.240 } 00:22:26.240 }, 00:22:26.240 { 00:22:26.240 "method": "sock_impl_set_options", 00:22:26.240 "params": { 00:22:26.240 "impl_name": "posix", 00:22:26.240 "recv_buf_size": 2097152, 00:22:26.240 "send_buf_size": 2097152, 00:22:26.240 "enable_recv_pipe": true, 00:22:26.240 "enable_quickack": false, 00:22:26.240 "enable_placement_id": 0, 00:22:26.240 "enable_zerocopy_send_server": true, 00:22:26.240 "enable_zerocopy_send_client": false, 00:22:26.240 "zerocopy_threshold": 0, 00:22:26.240 "tls_version": 0, 00:22:26.240 "enable_ktls": false 00:22:26.240 } 00:22:26.240 } 00:22:26.240 ] 00:22:26.240 }, 00:22:26.240 { 00:22:26.240 "subsystem": "vmd", 00:22:26.240 "config": [] 00:22:26.240 }, 00:22:26.240 { 00:22:26.240 "subsystem": "accel", 00:22:26.240 "config": [ 00:22:26.240 { 00:22:26.240 "method": "accel_set_options", 00:22:26.240 "params": { 00:22:26.240 "small_cache_size": 128, 00:22:26.240 "large_cache_size": 16, 00:22:26.240 "task_count": 2048, 00:22:26.240 "sequence_count": 2048, 00:22:26.240 "buf_count": 2048 00:22:26.240 } 00:22:26.240 } 00:22:26.240 ] 00:22:26.240 }, 00:22:26.240 { 00:22:26.240 "subsystem": "bdev", 00:22:26.240 "config": [ 00:22:26.240 { 00:22:26.240 "method": "bdev_set_options", 00:22:26.240 "params": { 00:22:26.240 "bdev_io_pool_size": 65535, 00:22:26.240 "bdev_io_cache_size": 256, 00:22:26.240 "bdev_auto_examine": true, 00:22:26.240 "iobuf_small_cache_size": 128, 00:22:26.240 "iobuf_large_cache_size": 16 00:22:26.240 } 00:22:26.240 }, 00:22:26.240 { 00:22:26.240 "method": "bdev_raid_set_options", 00:22:26.240 "params": { 00:22:26.240 "process_window_size_kb": 1024, 00:22:26.240 "process_max_bandwidth_mb_sec": 0 00:22:26.240 } 00:22:26.240 }, 00:22:26.240 { 00:22:26.240 "method": "bdev_iscsi_set_options", 00:22:26.240 "params": { 00:22:26.240 "timeout_sec": 30 00:22:26.240 } 00:22:26.240 }, 00:22:26.240 { 00:22:26.240 "method": "bdev_nvme_set_options", 00:22:26.240 "params": { 00:22:26.240 "action_on_timeout": "none", 00:22:26.240 "timeout_us": 0, 00:22:26.240 "timeout_admin_us": 0, 00:22:26.240 "keep_alive_timeout_ms": 10000, 00:22:26.240 "arbitration_burst": 0, 00:22:26.240 "low_priority_weight": 0, 00:22:26.240 "medium_priority_weight": 0, 00:22:26.240 "high_priority_weight": 0, 00:22:26.240 "nvme_adminq_poll_period_us": 10000, 00:22:26.240 "nvme_ioq_poll_period_us": 0, 00:22:26.240 "io_queue_requests": 512, 00:22:26.240 "delay_cmd_submit": true, 00:22:26.240 "transport_retry_count": 4, 00:22:26.240 "bdev_retry_count": 3, 00:22:26.240 "transport_ack_timeout": 0, 00:22:26.240 "ctrlr_loss_timeout_sec": 0, 00:22:26.240 "reconnect_delay_sec": 0, 00:22:26.240 "fast_io_fail_timeout_sec": 0, 00:22:26.240 "disable_auto_failback": false, 00:22:26.240 "generate_uuids": false, 00:22:26.240 "transport_tos": 0, 00:22:26.240 "nvme_error_stat": false, 00:22:26.240 "rdma_srq_size": 0, 00:22:26.240 "io_path_stat": false, 00:22:26.240 "allow_accel_sequence": false, 00:22:26.240 "rdma_max_cq_size": 0, 00:22:26.240 "rdma_cm_event_timeout_ms": 0, 00:22:26.240 "dhchap_digests": [ 00:22:26.240 "sha256", 00:22:26.240 "sha384", 00:22:26.240 "sha512" 00:22:26.240 ], 00:22:26.240 "dhchap_dhgroups": [ 00:22:26.240 "null", 00:22:26.240 "ffdhe2048", 00:22:26.240 "ffdhe3072", 00:22:26.240 "ffdhe4096", 00:22:26.240 
"ffdhe6144", 00:22:26.240 "ffdhe8192" 00:22:26.240 ] 00:22:26.240 } 00:22:26.240 }, 00:22:26.240 { 00:22:26.240 "method": "bdev_nvme_attach_controller", 00:22:26.240 "params": { 00:22:26.240 "name": "nvme0", 00:22:26.240 "trtype": "TCP", 00:22:26.240 "adrfam": "IPv4", 00:22:26.240 "traddr": "10.0.0.2", 00:22:26.240 "trsvcid": "4420", 00:22:26.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.240 "prchk_reftag": false, 00:22:26.240 "prchk_guard": false, 00:22:26.241 "ctrlr_loss_timeout_sec": 0, 00:22:26.241 "reconnect_delay_sec": 0, 00:22:26.241 "fast_io_fail_timeout_sec": 0, 00:22:26.241 "psk": "key0", 00:22:26.241 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:26.241 "hdgst": false, 00:22:26.241 "ddgst": false 00:22:26.241 } 00:22:26.241 }, 00:22:26.241 { 00:22:26.241 "method": "bdev_nvme_set_hotplug", 00:22:26.241 "params": { 00:22:26.241 "period_us": 100000, 00:22:26.241 "enable": false 00:22:26.241 } 00:22:26.241 }, 00:22:26.241 { 00:22:26.241 "method": "bdev_enable_histogram", 00:22:26.241 "params": { 00:22:26.241 "name": "nvme0n1", 00:22:26.241 "enable": true 00:22:26.241 } 00:22:26.241 }, 00:22:26.241 { 00:22:26.241 "method": "bdev_wait_for_examine" 00:22:26.241 } 00:22:26.241 ] 00:22:26.241 }, 00:22:26.241 { 00:22:26.241 "subsystem": "nbd", 00:22:26.241 "config": [] 00:22:26.241 } 00:22:26.241 ] 00:22:26.241 }' 00:22:26.241 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3389489 00:22:26.241 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3389489 ']' 00:22:26.241 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3389489 00:22:26.241 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname 00:22:26.241 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:26.241 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3389489 00:22:26.502 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:22:26.502 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:22:26.502 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3389489' 00:22:26.502 killing process with pid 3389489 00:22:26.502 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3389489 00:22:26.502 Received shutdown signal, test time was about 1.000000 seconds 00:22:26.502 00:22:26.502 Latency(us) 00:22:26.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.502 =================================================================================================================== 00:22:26.502 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:26.502 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3389489 00:22:26.502 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3389428 00:22:26.502 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3389428 ']' 00:22:26.502 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3389428 00:22:26.502 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname 00:22:26.502 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:26.502 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3389428 00:22:26.502 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:22:26.502 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:22:26.502 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3389428' 00:22:26.502 killing process with pid 3389428 00:22:26.502 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3389428 00:22:26.502 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3389428 00:22:26.763 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:22:26.763 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:26.763 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:26.763 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:22:26.763 "subsystems": [ 00:22:26.763 { 00:22:26.763 "subsystem": "keyring", 00:22:26.763 "config": [ 00:22:26.763 { 00:22:26.763 "method": "keyring_file_add_key", 00:22:26.763 "params": { 00:22:26.763 "name": "key0", 00:22:26.763 "path": "/tmp/tmp.pnUzoXj89B" 00:22:26.763 } 00:22:26.763 } 00:22:26.763 ] 00:22:26.763 }, 00:22:26.763 { 00:22:26.763 "subsystem": "iobuf", 00:22:26.763 "config": [ 00:22:26.763 { 00:22:26.763 "method": "iobuf_set_options", 00:22:26.763 "params": { 00:22:26.763 "small_pool_count": 8192, 00:22:26.763 "large_pool_count": 1024, 00:22:26.763 "small_bufsize": 8192, 00:22:26.763 "large_bufsize": 135168 00:22:26.763 } 00:22:26.763 } 00:22:26.763 ] 00:22:26.763 }, 00:22:26.763 { 00:22:26.763 "subsystem": "sock", 00:22:26.763 "config": [ 00:22:26.763 { 00:22:26.763 "method": "sock_set_default_impl", 00:22:26.763 "params": { 00:22:26.763 "impl_name": "posix" 00:22:26.763 } 00:22:26.763 }, 00:22:26.763 { 00:22:26.763 "method": "sock_impl_set_options", 00:22:26.763 "params": { 00:22:26.763 "impl_name": "ssl", 00:22:26.763 "recv_buf_size": 4096, 00:22:26.763 "send_buf_size": 4096, 00:22:26.763 "enable_recv_pipe": true, 00:22:26.763 "enable_quickack": false, 00:22:26.763 "enable_placement_id": 0, 00:22:26.763 "enable_zerocopy_send_server": true, 00:22:26.763 "enable_zerocopy_send_client": false, 00:22:26.763 "zerocopy_threshold": 0, 00:22:26.763 "tls_version": 0, 00:22:26.763 "enable_ktls": false 00:22:26.763 } 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "method": "sock_impl_set_options", 00:22:26.764 "params": { 00:22:26.764 "impl_name": "posix", 00:22:26.764 "recv_buf_size": 2097152, 00:22:26.764 "send_buf_size": 2097152, 00:22:26.764 "enable_recv_pipe": true, 00:22:26.764 "enable_quickack": false, 00:22:26.764 "enable_placement_id": 0, 00:22:26.764 "enable_zerocopy_send_server": true, 00:22:26.764 "enable_zerocopy_send_client": false, 00:22:26.764 "zerocopy_threshold": 0, 00:22:26.764 "tls_version": 0, 00:22:26.764 "enable_ktls": false 00:22:26.764 } 00:22:26.764 } 00:22:26.764 ] 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "subsystem": "vmd", 00:22:26.764 "config": [] 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "subsystem": "accel", 00:22:26.764 "config": [ 00:22:26.764 { 00:22:26.764 "method": "accel_set_options", 00:22:26.764 "params": { 
00:22:26.764 "small_cache_size": 128, 00:22:26.764 "large_cache_size": 16, 00:22:26.764 "task_count": 2048, 00:22:26.764 "sequence_count": 2048, 00:22:26.764 "buf_count": 2048 00:22:26.764 } 00:22:26.764 } 00:22:26.764 ] 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "subsystem": "bdev", 00:22:26.764 "config": [ 00:22:26.764 { 00:22:26.764 "method": "bdev_set_options", 00:22:26.764 "params": { 00:22:26.764 "bdev_io_pool_size": 65535, 00:22:26.764 "bdev_io_cache_size": 256, 00:22:26.764 "bdev_auto_examine": true, 00:22:26.764 "iobuf_small_cache_size": 128, 00:22:26.764 "iobuf_large_cache_size": 16 00:22:26.764 } 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "method": "bdev_raid_set_options", 00:22:26.764 "params": { 00:22:26.764 "process_window_size_kb": 1024, 00:22:26.764 "process_max_bandwidth_mb_sec": 0 00:22:26.764 } 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "method": "bdev_iscsi_set_options", 00:22:26.764 "params": { 00:22:26.764 "timeout_sec": 30 00:22:26.764 } 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "method": "bdev_nvme_set_options", 00:22:26.764 "params": { 00:22:26.764 "action_on_timeout": "none", 00:22:26.764 "timeout_us": 0, 00:22:26.764 "timeout_admin_us": 0, 00:22:26.764 "keep_alive_timeout_ms": 10000, 00:22:26.764 "arbitration_burst": 0, 00:22:26.764 "low_priority_weight": 0, 00:22:26.764 "medium_priority_weight": 0, 00:22:26.764 "high_priority_weight": 0, 00:22:26.764 "nvme_adminq_poll_period_us": 10000, 00:22:26.764 "nvme_ioq_poll_period_us": 0, 00:22:26.764 "io_queue_requests": 0, 00:22:26.764 "delay_cmd_submit": true, 00:22:26.764 "transport_retry_count": 4, 00:22:26.764 "bdev_retry_count": 3, 00:22:26.764 "transport_ack_timeout": 0, 00:22:26.764 "ctrlr_loss_timeout_sec": 0, 00:22:26.764 "reconnect_delay_sec": 0, 00:22:26.764 "fast_io_fail_timeout_sec": 0, 00:22:26.764 "disable_auto_failback": false, 00:22:26.764 "generate_uuids": false, 00:22:26.764 "transport_tos": 0, 00:22:26.764 "nvme_error_stat": false, 00:22:26.764 "rdma_srq_size": 0, 00:22:26.764 "io_path_stat": false, 00:22:26.764 "allow_accel_sequence": false, 00:22:26.764 "rdma_max_cq_size": 0, 00:22:26.764 "rdma_cm_event_timeout_ms": 0, 00:22:26.764 "dhchap_digests": [ 00:22:26.764 "sha256", 00:22:26.764 "sha384", 00:22:26.764 "sha512" 00:22:26.764 ], 00:22:26.764 "dhchap_dhgroups": [ 00:22:26.764 "null", 00:22:26.764 "ffdhe2048", 00:22:26.764 "ffdhe3072", 00:22:26.764 "ffdhe4096", 00:22:26.764 "ffdhe6144", 00:22:26.764 "ffdhe8192" 00:22:26.764 ] 00:22:26.764 } 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "method": "bdev_nvme_set_hotplug", 00:22:26.764 "params": { 00:22:26.764 "period_us": 100000, 00:22:26.764 "enable": false 00:22:26.764 } 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "method": "bdev_malloc_create", 00:22:26.764 "params": { 00:22:26.764 "name": "malloc0", 00:22:26.764 "num_blocks": 8192, 00:22:26.764 "block_size": 4096, 00:22:26.764 "physical_block_size": 4096, 00:22:26.764 "uuid": "d4a82356-b7db-4695-b927-42e7928f5e23", 00:22:26.764 "optimal_io_boundary": 0, 00:22:26.764 "md_size": 0, 00:22:26.764 "dif_type": 0, 00:22:26.764 "dif_is_head_of_md": false, 00:22:26.764 "dif_pi_format": 0 00:22:26.764 } 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "method": "bdev_wait_for_examine" 00:22:26.764 } 00:22:26.764 ] 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "subsystem": "nbd", 00:22:26.764 "config": [] 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "subsystem": "scheduler", 00:22:26.764 "config": [ 00:22:26.764 { 00:22:26.764 "method": "framework_set_scheduler", 00:22:26.764 "params": { 00:22:26.764 "name": 
"static" 00:22:26.764 } 00:22:26.764 } 00:22:26.764 ] 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "subsystem": "nvmf", 00:22:26.764 "config": [ 00:22:26.764 { 00:22:26.764 "method": "nvmf_set_config", 00:22:26.764 "params": { 00:22:26.764 "discovery_filter": "match_any", 00:22:26.764 "admin_cmd_passthru": { 00:22:26.764 "identify_ctrlr": false 00:22:26.764 }, 00:22:26.764 "dhchap_digests": [ 00:22:26.764 "sha256", 00:22:26.764 "sha384", 00:22:26.764 "sha512" 00:22:26.764 ], 00:22:26.764 "dhchap_dhgroups": [ 00:22:26.764 "null", 00:22:26.764 "ffdhe2048", 00:22:26.764 "ffdhe3072", 00:22:26.764 "ffdhe4096", 00:22:26.764 "ffdhe6144", 00:22:26.764 "ffdhe8192" 00:22:26.764 ] 00:22:26.764 } 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "method": "nvmf_set_max_subsystems", 00:22:26.764 "params": { 00:22:26.764 "max_subsystems": 1024 00:22:26.764 } 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "method": "nvmf_set_crdt", 00:22:26.764 "params": { 00:22:26.764 "crdt1": 0, 00:22:26.764 "crdt2": 0, 00:22:26.764 "crdt3": 0 00:22:26.764 } 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "method": "nvmf_create_transport", 00:22:26.764 "params": { 00:22:26.764 "trtype": "TCP", 00:22:26.764 "max_queue_depth": 128, 00:22:26.764 "max_io_qpairs_per_ctrlr": 127, 00:22:26.764 "in_capsule_data_size": 4096, 00:22:26.764 "max_io_size": 131072, 00:22:26.764 "io_unit_size": 131072, 00:22:26.764 "max_aq_depth": 128, 00:22:26.764 "num_shared_buffers": 511, 00:22:26.764 "buf_cache_size": 4294967295, 00:22:26.764 "dif_insert_or_strip": false, 00:22:26.764 "zcopy": false, 00:22:26.764 "c2h_success": false, 00:22:26.764 "sock_priority": 0, 00:22:26.764 "abort_timeout_sec": 1, 00:22:26.764 "ack_timeout": 0, 00:22:26.764 "data_wr_pool_size": 0 00:22:26.764 } 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "method": "nvmf_create_subsystem", 00:22:26.764 "params": { 00:22:26.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.764 "allow_any_host": false, 00:22:26.764 "serial_number": "00000000000000000000", 00:22:26.764 "model_number": "SPDK bdev Controller", 00:22:26.764 "max_namespaces": 32, 00:22:26.764 "min_cntlid": 1, 00:22:26.764 "max_cntlid": 65519, 00:22:26.764 "ana_reporting": false 00:22:26.764 } 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "method": "nvmf_subsystem_add_host", 00:22:26.764 "params": { 00:22:26.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.764 "host": "nqn.2016-06.io.spdk:host1", 00:22:26.764 "psk": "key0" 00:22:26.764 } 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "method": "nvmf_subsystem_add_ns", 00:22:26.764 "params": { 00:22:26.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.764 "namespace": { 00:22:26.764 "nsid": 1, 00:22:26.764 "bdev_name": "malloc0", 00:22:26.764 "nguid": "D4A82356B7DB4695B92742E7928F5E23", 00:22:26.764 "uuid": "d4a82356-b7db-4695-b927-42e7928f5e23", 00:22:26.764 "no_auto_visible": false 00:22:26.764 } 00:22:26.764 } 00:22:26.764 }, 00:22:26.764 { 00:22:26.764 "method": "nvmf_subsystem_add_listener", 00:22:26.764 "params": { 00:22:26.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.764 "listen_address": { 00:22:26.764 "trtype": "TCP", 00:22:26.764 "adrfam": "IPv4", 00:22:26.764 "traddr": "10.0.0.2", 00:22:26.764 "trsvcid": "4420" 00:22:26.764 }, 00:22:26.764 "secure_channel": false, 00:22:26.764 "sock_impl": "ssl" 00:22:26.764 } 00:22:26.764 } 00:22:26.764 ] 00:22:26.764 } 00:22:26.764 ] 00:22:26.764 }' 00:22:26.764 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.764 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
nvmfpid=3390175 00:22:26.764 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3390175 00:22:26.764 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:26.764 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3390175 ']' 00:22:26.764 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.764 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:26.764 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.764 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:26.764 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.764 [2024-10-07 09:43:26.298953] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:22:26.765 [2024-10-07 09:43:26.299007] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.765 [2024-10-07 09:43:26.383637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.025 [2024-10-07 09:43:26.437030] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.025 [2024-10-07 09:43:26.437065] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.025 [2024-10-07 09:43:26.437071] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.025 [2024-10-07 09:43:26.437076] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.025 [2024-10-07 09:43:26.437080] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
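The nvmf_tgt command traced above shows the delivery pattern this harness uses throughout: the target runs inside the cvl_0_0_ns_spdk network namespace and receives its whole JSON configuration over an anonymous pipe (-c /dev/fd/62), which is exactly what a bash process substitution expands to. A minimal sketch of that launch pattern, with the config body reduced to a placeholder (in the real run it is the full config echoed above):

#!/usr/bin/env bash
# Minimal sketch of the launch traced above: hand nvmf_tgt its JSON config
# through process substitution, inside the target network namespace.
# CONFIG_JSON is a placeholder for this illustration.
CONFIG_JSON='{"subsystems": []}'

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -c <(echo "$CONFIG_JSON") &
nvmfpid=$!
# The harness then polls /var/tmp/spdk.sock (waitforlisten) before issuing RPCs.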
00:22:27.025 [2024-10-07 09:43:26.437580] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.025 [2024-10-07 09:43:26.639554] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.025 [2024-10-07 09:43:26.671587] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:27.025 [2024-10-07 09:43:26.671804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.598 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:22:27.598 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:22:27.598 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:27.598 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@733 -- # xtrace_disable 00:22:27.598 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.598 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.598 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3390369 00:22:27.598 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3390369 /var/tmp/bdevperf.sock 00:22:27.598 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # '[' -z 3390369 ']' 00:22:27.598 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.598 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:27.598 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
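Before the initiator comes up, it is worth isolating what the target-side config echoed earlier actually wires together for TLS: a keyring entry naming the PSK "key0", the "ssl" socket implementation on the listener, and an nvmf_subsystem_add_host call binding that PSK to host1 on cnode1. A trimmed sketch of just those sections follows; the malloc bdev, namespace, and tuning parameters from the full config are omitted, and assuming the target registers the same key file that the bdevperf config below names (/tmp/tmp.pnUzoXj89B) is an inference, since the target's keyring section scrolled past before this excerpt:

# Trimmed target-side TLS config sketch (assumed key path, defaults elsewhere).
cat <<'EOF' > /tmp/tgt_tls_min.json
{
  "subsystems": [
    { "subsystem": "keyring", "config": [
      { "method": "keyring_file_add_key",
        "params": { "name": "key0", "path": "/tmp/tmp.pnUzoXj89B" } }
    ] },
    { "subsystem": "nvmf", "config": [
      { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
      { "method": "nvmf_create_subsystem",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
      { "method": "nvmf_subsystem_add_host",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
      { "method": "nvmf_subsystem_add_listener",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                        "traddr": "10.0.0.2", "trsvcid": "4420" },
                    "secure_channel": false, "sock_impl": "ssl" } }
    ] }
  ]
}
EOF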
00:22:27.598 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:27.598 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:27.598 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.598 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:22:27.598 "subsystems": [ 00:22:27.598 { 00:22:27.598 "subsystem": "keyring", 00:22:27.598 "config": [ 00:22:27.598 { 00:22:27.598 "method": "keyring_file_add_key", 00:22:27.598 "params": { 00:22:27.598 "name": "key0", 00:22:27.598 "path": "/tmp/tmp.pnUzoXj89B" 00:22:27.598 } 00:22:27.598 } 00:22:27.598 ] 00:22:27.598 }, 00:22:27.598 { 00:22:27.598 "subsystem": "iobuf", 00:22:27.598 "config": [ 00:22:27.598 { 00:22:27.598 "method": "iobuf_set_options", 00:22:27.598 "params": { 00:22:27.598 "small_pool_count": 8192, 00:22:27.598 "large_pool_count": 1024, 00:22:27.598 "small_bufsize": 8192, 00:22:27.598 "large_bufsize": 135168 00:22:27.598 } 00:22:27.598 } 00:22:27.598 ] 00:22:27.598 }, 00:22:27.598 { 00:22:27.598 "subsystem": "sock", 00:22:27.598 "config": [ 00:22:27.598 { 00:22:27.598 "method": "sock_set_default_impl", 00:22:27.598 "params": { 00:22:27.598 "impl_name": "posix" 00:22:27.598 } 00:22:27.598 }, 00:22:27.598 { 00:22:27.598 "method": "sock_impl_set_options", 00:22:27.598 "params": { 00:22:27.598 "impl_name": "ssl", 00:22:27.598 "recv_buf_size": 4096, 00:22:27.598 "send_buf_size": 4096, 00:22:27.598 "enable_recv_pipe": true, 00:22:27.598 "enable_quickack": false, 00:22:27.598 "enable_placement_id": 0, 00:22:27.598 "enable_zerocopy_send_server": true, 00:22:27.598 "enable_zerocopy_send_client": false, 00:22:27.598 "zerocopy_threshold": 0, 00:22:27.598 "tls_version": 0, 00:22:27.598 "enable_ktls": false 00:22:27.598 } 00:22:27.598 }, 00:22:27.598 { 00:22:27.598 "method": "sock_impl_set_options", 00:22:27.598 "params": { 00:22:27.598 "impl_name": "posix", 00:22:27.598 "recv_buf_size": 2097152, 00:22:27.598 "send_buf_size": 2097152, 00:22:27.598 "enable_recv_pipe": true, 00:22:27.598 "enable_quickack": false, 00:22:27.598 "enable_placement_id": 0, 00:22:27.598 "enable_zerocopy_send_server": true, 00:22:27.598 "enable_zerocopy_send_client": false, 00:22:27.598 "zerocopy_threshold": 0, 00:22:27.598 "tls_version": 0, 00:22:27.598 "enable_ktls": false 00:22:27.598 } 00:22:27.598 } 00:22:27.598 ] 00:22:27.598 }, 00:22:27.598 { 00:22:27.598 "subsystem": "vmd", 00:22:27.598 "config": [] 00:22:27.598 }, 00:22:27.598 { 00:22:27.598 "subsystem": "accel", 00:22:27.598 "config": [ 00:22:27.598 { 00:22:27.598 "method": "accel_set_options", 00:22:27.598 "params": { 00:22:27.598 "small_cache_size": 128, 00:22:27.598 "large_cache_size": 16, 00:22:27.598 "task_count": 2048, 00:22:27.598 "sequence_count": 2048, 00:22:27.598 "buf_count": 2048 00:22:27.598 } 00:22:27.598 } 00:22:27.598 ] 00:22:27.598 }, 00:22:27.598 { 00:22:27.598 "subsystem": "bdev", 00:22:27.598 "config": [ 00:22:27.598 { 00:22:27.598 "method": "bdev_set_options", 00:22:27.598 "params": { 00:22:27.598 "bdev_io_pool_size": 65535, 00:22:27.598 "bdev_io_cache_size": 256, 00:22:27.598 "bdev_auto_examine": true, 00:22:27.598 "iobuf_small_cache_size": 128, 00:22:27.598 "iobuf_large_cache_size": 16 00:22:27.598 } 00:22:27.598 }, 00:22:27.598 { 00:22:27.598 "method": "bdev_raid_set_options", 00:22:27.598 
"params": { 00:22:27.598 "process_window_size_kb": 1024, 00:22:27.598 "process_max_bandwidth_mb_sec": 0 00:22:27.598 } 00:22:27.598 }, 00:22:27.598 { 00:22:27.598 "method": "bdev_iscsi_set_options", 00:22:27.598 "params": { 00:22:27.598 "timeout_sec": 30 00:22:27.598 } 00:22:27.598 }, 00:22:27.598 { 00:22:27.598 "method": "bdev_nvme_set_options", 00:22:27.598 "params": { 00:22:27.598 "action_on_timeout": "none", 00:22:27.598 "timeout_us": 0, 00:22:27.598 "timeout_admin_us": 0, 00:22:27.598 "keep_alive_timeout_ms": 10000, 00:22:27.598 "arbitration_burst": 0, 00:22:27.598 "low_priority_weight": 0, 00:22:27.598 "medium_priority_weight": 0, 00:22:27.598 "high_priority_weight": 0, 00:22:27.598 "nvme_adminq_poll_period_us": 10000, 00:22:27.598 "nvme_ioq_poll_period_us": 0, 00:22:27.598 "io_queue_requests": 512, 00:22:27.598 "delay_cmd_submit": true, 00:22:27.598 "transport_retry_count": 4, 00:22:27.598 "bdev_retry_count": 3, 00:22:27.598 "transport_ack_timeout": 0, 00:22:27.598 "ctrlr_loss_timeout_sec": 0, 00:22:27.598 "reconnect_delay_sec": 0, 00:22:27.598 "fast_io_fail_timeout_sec": 0, 00:22:27.598 "disable_auto_failback": false, 00:22:27.598 "generate_uuids": false, 00:22:27.598 "transport_tos": 0, 00:22:27.598 "nvme_error_stat": false, 00:22:27.598 "rdma_srq_size": 0, 00:22:27.598 "io_path_stat": false, 00:22:27.598 "allow_accel_sequence": false, 00:22:27.598 "rdma_max_cq_size": 0, 00:22:27.598 "rdma_cm_event_timeout_ms": 0, 00:22:27.598 "dhchap_digests": [ 00:22:27.598 "sha256", 00:22:27.598 "sha384", 00:22:27.598 "sha512" 00:22:27.598 ], 00:22:27.598 "dhchap_dhgroups": [ 00:22:27.598 "null", 00:22:27.598 "ffdhe2048", 00:22:27.598 "ffdhe3072", 00:22:27.598 "ffdhe4096", 00:22:27.598 "ffdhe6144", 00:22:27.598 "ffdhe8192" 00:22:27.598 ] 00:22:27.598 } 00:22:27.598 }, 00:22:27.598 { 00:22:27.598 "method": "bdev_nvme_attach_controller", 00:22:27.598 "params": { 00:22:27.598 "name": "nvme0", 00:22:27.598 "trtype": "TCP", 00:22:27.598 "adrfam": "IPv4", 00:22:27.598 "traddr": "10.0.0.2", 00:22:27.598 "trsvcid": "4420", 00:22:27.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.598 "prchk_reftag": false, 00:22:27.598 "prchk_guard": false, 00:22:27.598 "ctrlr_loss_timeout_sec": 0, 00:22:27.598 "reconnect_delay_sec": 0, 00:22:27.598 "fast_io_fail_timeout_sec": 0, 00:22:27.598 "psk": "key0", 00:22:27.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.598 "hdgst": false, 00:22:27.598 "ddgst": false 00:22:27.598 } 00:22:27.598 }, 00:22:27.598 { 00:22:27.598 "method": "bdev_nvme_set_hotplug", 00:22:27.598 "params": { 00:22:27.598 "period_us": 100000, 00:22:27.598 "enable": false 00:22:27.598 } 00:22:27.598 }, 00:22:27.598 { 00:22:27.598 "method": "bdev_enable_histogram", 00:22:27.598 "params": { 00:22:27.598 "name": "nvme0n1", 00:22:27.598 "enable": true 00:22:27.598 } 00:22:27.598 }, 00:22:27.598 { 00:22:27.598 "method": "bdev_wait_for_examine" 00:22:27.598 } 00:22:27.599 ] 00:22:27.599 }, 00:22:27.599 { 00:22:27.599 "subsystem": "nbd", 00:22:27.599 "config": [] 00:22:27.599 } 00:22:27.599 ] 00:22:27.599 }' 00:22:27.599 [2024-10-07 09:43:27.191167] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:22:27.599 [2024-10-07 09:43:27.191221] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3390369 ] 00:22:27.861 [2024-10-07 09:43:27.268722] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.861 [2024-10-07 09:43:27.322128] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.861 [2024-10-07 09:43:27.457201] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:28.433 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:22:28.433 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # return 0 00:22:28.433 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:28.433 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:22:28.694 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.694 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:28.694 Running I/O for 1 seconds... 00:22:29.643 5161.00 IOPS, 20.16 MiB/s 00:22:29.643 Latency(us) 00:22:29.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.643 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:29.643 Verification LBA range: start 0x0 length 0x2000 00:22:29.643 nvme0n1 : 1.01 5224.62 20.41 0.00 0.00 24352.44 4642.13 79517.01 00:22:29.643 =================================================================================================================== 00:22:29.643 Total : 5224.62 20.41 0.00 0.00 24352.44 4642.13 79517.01 00:22:29.643 { 00:22:29.643 "results": [ 00:22:29.643 { 00:22:29.643 "job": "nvme0n1", 00:22:29.643 "core_mask": "0x2", 00:22:29.643 "workload": "verify", 00:22:29.643 "status": "finished", 00:22:29.643 "verify_range": { 00:22:29.643 "start": 0, 00:22:29.643 "length": 8192 00:22:29.643 }, 00:22:29.643 "queue_depth": 128, 00:22:29.643 "io_size": 4096, 00:22:29.643 "runtime": 1.012514, 00:22:29.643 "iops": 5224.61911637765, 00:22:29.643 "mibps": 20.408668423350196, 00:22:29.643 "io_failed": 0, 00:22:29.643 "io_timeout": 0, 00:22:29.643 "avg_latency_us": 24352.43634530561, 00:22:29.643 "min_latency_us": 4642.133333333333, 00:22:29.643 "max_latency_us": 79517.01333333334 00:22:29.643 } 00:22:29.643 ], 00:22:29.643 "core_count": 1 00:22:29.643 } 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # type=--id 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # id=0 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # '[' --id = --pid ']' 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@817 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@817 -- # shm_files=nvmf_trace.0 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # [[ -z nvmf_trace.0 ]] 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # for n in $shm_files 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:29.926 nvmf_trace.0 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@826 -- # return 0 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3390369 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3390369 ']' 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3390369 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3390369 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3390369' 00:22:29.926 killing process with pid 3390369 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3390369 00:22:29.926 Received shutdown signal, test time was about 1.000000 seconds 00:22:29.926 00:22:29.926 Latency(us) 00:22:29.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.926 =================================================================================================================== 00:22:29.926 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:29.926 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3390369 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:30.245 rmmod nvme_tcp 00:22:30.245 rmmod nvme_fabrics 00:22:30.245 rmmod nvme_keyring 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # 
return 0 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 3390175 ']' 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 3390175 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' -z 3390175 ']' 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # kill -0 3390175 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # uname 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3390175 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3390175' 00:22:30.245 killing process with pid 3390175 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # kill 3390175 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@977 -- # wait 3390175 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.245 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.823 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:32.823 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.f9BBLwJz7u /tmp/tmp.AiMWEjyzjP /tmp/tmp.pnUzoXj89B 00:22:32.823 00:22:32.823 real 1m28.354s 00:22:32.823 user 2m19.237s 00:22:32.823 sys 0m27.614s 00:22:32.823 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # xtrace_disable 00:22:32.823 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.823 ************************************ 00:22:32.823 END TEST nvmf_tls 00:22:32.823 ************************************ 00:22:32.823 09:43:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:32.823 09:43:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:22:32.823 09:43:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:22:32.823 09:43:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:32.823 ************************************ 00:22:32.823 START TEST nvmf_fips 00:22:32.823 ************************************ 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:32.823 * Looking for test storage... 00:22:32.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1626 -- # lcov --version 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:32.823 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:22:32.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.824 --rc genhtml_branch_coverage=1 00:22:32.824 --rc genhtml_function_coverage=1 00:22:32.824 --rc genhtml_legend=1 00:22:32.824 --rc geninfo_all_blocks=1 00:22:32.824 --rc geninfo_unexecuted_blocks=1 00:22:32.824 00:22:32.824 ' 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:22:32.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.824 --rc genhtml_branch_coverage=1 00:22:32.824 --rc genhtml_function_coverage=1 00:22:32.824 --rc genhtml_legend=1 00:22:32.824 --rc geninfo_all_blocks=1 00:22:32.824 --rc geninfo_unexecuted_blocks=1 00:22:32.824 00:22:32.824 ' 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:22:32.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.824 --rc genhtml_branch_coverage=1 00:22:32.824 --rc genhtml_function_coverage=1 00:22:32.824 --rc genhtml_legend=1 00:22:32.824 --rc geninfo_all_blocks=1 00:22:32.824 --rc geninfo_unexecuted_blocks=1 00:22:32.824 00:22:32.824 ' 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:22:32.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.824 --rc genhtml_branch_coverage=1 00:22:32.824 --rc genhtml_function_coverage=1 00:22:32.824 --rc genhtml_legend=1 00:22:32.824 --rc geninfo_all_blocks=1 00:22:32.824 --rc geninfo_unexecuted_blocks=1 00:22:32.824 00:22:32.824 ' 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:32.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:32.824 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! -t 0 ]] 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # local es=0 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@641 -- # local arg=openssl 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@645 -- # type -t openssl 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@647 -- # type -P openssl 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@647 -- # arg=/usr/bin/openssl 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@647 -- # [[ -x /usr/bin/openssl ]] 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@656 -- # openssl md5 /dev/fd/62 00:22:32.825 Error setting digest 00:22:32.825 40721CF8AB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:32.825 40721CF8AB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@656 -- # es=1 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:22:32.825 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # 
local -ga e810 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.968 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:40.969 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
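The device-discovery loop traced here is how the harness maps supported NICs to kernel interfaces: PCI addresses are first matched by vendor/device ID (the e810 entries cover Intel 0x1592 and 0x159b, as populated above), and each surviving address is then resolved to its bound net device through sysfs. A condensed sketch of that resolution step, following the same conventions as the traced common.sh code:

# For every matched PCI address, the kernel exposes the bound interface
# under /sys/bus/pci/devices/<addr>/net/; keep just the interface names.
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path prefix
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done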
00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:40.969 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:40.969 Found net devices under 0000:31:00.0: cvl_0_0 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:40.969 Found net devices under 0000:31:00.1: cvl_0_1 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ 
yes == yes ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.969 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:40.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:22:40.969 00:22:40.969 --- 10.0.0.2 ping statistics --- 00:22:40.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.969 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:22:40.969 00:22:40.969 --- 10.0.0.1 ping statistics --- 00:22:40.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.969 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=3395305 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 3395305 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # '[' -z 3395305 ']' 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
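Before the target application comes up, it is worth pausing on the topology that nvmf_tcp_init just built and verified with those two pings: one port of the dual-port e810 NIC (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as 10.0.0.1, giving a real point-to-point NVMe/TCP link on a single host. A minimal sketch of the equivalent commands, reconstructed from the trace above (interface and namespace names as they appear in this run):

    IF_TGT=cvl_0_0; IF_INI=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set "$IF_TGT" netns "$NS"                 # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev "$IF_INI"             # initiator port stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF_TGT"
    ip link set "$IF_INI" up
    ip netns exec "$NS" ip link set "$IF_TGT" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$IF_INI" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                # root namespace -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1            # namespace -> root namespace

Every target-side command from here on is prefixed with "ip netns exec cvl_0_0_ns_spdk", which is exactly what NVMF_TARGET_NS_CMD holds.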
00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:40.969 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:40.970 [2024-10-07 09:43:40.292817] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:22:40.970 [2024-10-07 09:43:40.292892] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.970 [2024-10-07 09:43:40.383220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.970 [2024-10-07 09:43:40.474347] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.970 [2024-10-07 09:43:40.474406] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.970 [2024-10-07 09:43:40.474414] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.970 [2024-10-07 09:43:40.474422] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.970 [2024-10-07 09:43:40.474428] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.970 [2024-10-07 09:43:40.475216] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.540 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:22:41.540 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@867 -- # return 0 00:22:41.540 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:41.540 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@733 -- # xtrace_disable 00:22:41.540 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:41.540 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.540 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:41.540 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:41.540 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:22:41.540 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.bc1 00:22:41.540 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:41.540 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.bc1 00:22:41.541 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.bc1 00:22:41.541 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.bc1 00:22:41.541 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:41.801 [2024-10-07 09:43:41.324569] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.801 [2024-10-07 09:43:41.340562] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered 
experimental 00:22:41.801 [2024-10-07 09:43:41.340891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.801 malloc0 00:22:41.801 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.801 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3395526 00:22:41.801 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3395526 /var/tmp/bdevperf.sock 00:22:41.801 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.801 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # '[' -z 3395526 ']' 00:22:41.801 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.801 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:41.801 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.801 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:41.801 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:42.062 [2024-10-07 09:43:41.497514] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:22:42.062 [2024-10-07 09:43:41.497590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3395526 ] 00:22:42.062 [2024-10-07 09:43:41.579583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.062 [2024-10-07 09:43:41.672174] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.004 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:22:43.004 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@867 -- # return 0 00:22:43.004 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.bc1 00:22:43.004 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:43.264 [2024-10-07 09:43:42.674967] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:43.264 TLSTESTn1 00:22:43.264 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:43.264 Running I/O for 10 seconds... 
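While TLSTESTn1 runs its 10-second verify workload, the TLS plumbing the trace just performed can be condensed. The FIPS test writes a PSK interchange key to a mode-0600 temp file, registers it with the bdevperf app's keyring, and attaches a controller over TLS using that key. A sketch assembled from the commands above ($SPDK stands for the checked-out spdk directory; the key is the test value printed earlier in this log, not a secret):

    KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    KEY_PATH=$(mktemp -t spdk-psk.XXX)                # /tmp/spdk-psk.bc1 in this run
    echo -n "$KEY" > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"                            # 0600, as the test sets before registering
    # Register the key with the bdevperf app, then attach the TLS-protected controller:
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY_PATH"
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

Both the listener and the attach path print "TLS support is considered experimental", which is expected output for this SPDK revision rather than a failure.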
00:22:53.636 6183.00 IOPS, 24.15 MiB/s 6500.00 IOPS, 25.39 MiB/s 6548.33 IOPS, 25.58 MiB/s 6596.75 IOPS, 25.77 MiB/s 6608.20 IOPS, 25.81 MiB/s 6607.00 IOPS, 25.81 MiB/s 6590.43 IOPS, 25.74 MiB/s 6615.25 IOPS, 25.84 MiB/s 6609.56 IOPS, 25.82 MiB/s 6600.10 IOPS, 25.78 MiB/s 00:22:53.636 Latency(us) 00:22:53.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.636 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:53.636 Verification LBA range: start 0x0 length 0x2000 00:22:53.636 TLSTESTn1 : 10.02 6602.22 25.79 0.00 0.00 19355.15 5570.56 26214.40 00:22:53.636 =================================================================================================================== 00:22:53.636 Total : 6602.22 25.79 0.00 0.00 19355.15 5570.56 26214.40 00:22:53.636 { 00:22:53.636 "results": [ 00:22:53.636 { 00:22:53.636 "job": "TLSTESTn1", 00:22:53.636 "core_mask": "0x4", 00:22:53.636 "workload": "verify", 00:22:53.636 "status": "finished", 00:22:53.636 "verify_range": { 00:22:53.636 "start": 0, 00:22:53.636 "length": 8192 00:22:53.636 }, 00:22:53.636 "queue_depth": 128, 00:22:53.636 "io_size": 4096, 00:22:53.636 "runtime": 10.016025, 00:22:53.636 "iops": 6602.219942542077, 00:22:53.636 "mibps": 25.789921650554987, 00:22:53.636 "io_failed": 0, 00:22:53.636 "io_timeout": 0, 00:22:53.636 "avg_latency_us": 19355.14975078635, 00:22:53.636 "min_latency_us": 5570.56, 00:22:53.636 "max_latency_us": 26214.4 00:22:53.636 } 00:22:53.636 ], 00:22:53.636 "core_count": 1 00:22:53.636 } 00:22:53.636 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:53.636 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:53.636 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # type=--id 00:22:53.636 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # id=0 00:22:53.636 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # '[' --id = --pid ']' 00:22:53.636 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@817 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:53.636 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@817 -- # shm_files=nvmf_trace.0 00:22:53.636 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # [[ -z nvmf_trace.0 ]] 00:22:53.636 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # for n in $shm_files 00:22:53.636 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:53.636 nvmf_trace.0 00:22:53.636 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@826 -- # return 0 00:22:53.636 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3395526 00:22:53.636 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' -z 3395526 ']' 00:22:53.636 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # kill -0 3395526 00:22:53.636 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # uname 00:22:53.636 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:53.636 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3395526 00:22:53.636 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # process_name=reactor_2 00:22:53.636 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@963 -- # '[' reactor_2 = sudo ']' 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3395526' 00:22:53.637 killing process with pid 3395526 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # kill 3395526 00:22:53.637 Received shutdown signal, test time was about 10.000000 seconds 00:22:53.637 00:22:53.637 Latency(us) 00:22:53.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.637 =================================================================================================================== 00:22:53.637 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@977 -- # wait 3395526 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:53.637 rmmod nvme_tcp 00:22:53.637 rmmod nvme_fabrics 00:22:53.637 rmmod nvme_keyring 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 3395305 ']' 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 3395305 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' -z 3395305 ']' 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # kill -0 3395305 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # uname 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:53.637 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3395305 00:22:53.897 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:22:53.897 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:22:53.897 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3395305' 00:22:53.897 killing process with pid 3395305 00:22:53.897 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # kill 3395305 00:22:53.897 09:43:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@977 -- # wait 3395305 00:22:53.897 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:53.897 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:53.897 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:53.897 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:22:53.897 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:22:53.897 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:53.897 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:22:53.897 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:53.897 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:53.897 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.897 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.897 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.bc1 00:22:56.437 00:22:56.437 real 0m23.535s 00:22:56.437 user 0m25.758s 00:22:56.437 sys 0m9.124s 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # xtrace_disable 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:56.437 ************************************ 00:22:56.437 END TEST nvmf_fips 00:22:56.437 ************************************ 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:56.437 ************************************ 00:22:56.437 START TEST nvmf_control_msg_list 00:22:56.437 ************************************ 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:56.437 * Looking for test storage... 
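The nvmf_fips teardown just above uses a tag-and-sweep firewall idiom worth noting: every rule the harness inserts goes in through the ipts wrapper, which appends an iptables comment beginning with SPDK_NVMF, and the iptr cleanup then rewrites the ruleset minus anything carrying that tag. A sketch of the two helpers as they behave in this trace:

    # Insert rules tagged with a SPDK_NVMF comment so teardown can find them:
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    # Drop exactly the tagged rules, leaving the rest of the firewall untouched:
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # as done during setup
    iptr                                                       # as done during cleanup

This is why the earlier ACCEPT rule was inserted with the long comment string: cleanup becomes a pure text filter rather than a bookkeeping exercise.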
00:22:56.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1626 -- # lcov --version 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:22:56.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.437 --rc genhtml_branch_coverage=1 00:22:56.437 --rc genhtml_function_coverage=1 00:22:56.437 --rc genhtml_legend=1 00:22:56.437 --rc geninfo_all_blocks=1 00:22:56.437 --rc geninfo_unexecuted_blocks=1 00:22:56.437 00:22:56.437 ' 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:22:56.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.437 --rc genhtml_branch_coverage=1 00:22:56.437 --rc genhtml_function_coverage=1 00:22:56.437 --rc genhtml_legend=1 00:22:56.437 --rc geninfo_all_blocks=1 00:22:56.437 --rc geninfo_unexecuted_blocks=1 00:22:56.437 00:22:56.437 ' 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:22:56.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.437 --rc genhtml_branch_coverage=1 00:22:56.437 --rc genhtml_function_coverage=1 00:22:56.437 --rc genhtml_legend=1 00:22:56.437 --rc geninfo_all_blocks=1 00:22:56.437 --rc geninfo_unexecuted_blocks=1 00:22:56.437 00:22:56.437 ' 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:22:56.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.437 --rc genhtml_branch_coverage=1 00:22:56.437 --rc genhtml_function_coverage=1 00:22:56.437 --rc genhtml_legend=1 00:22:56.437 --rc geninfo_all_blocks=1 00:22:56.437 --rc geninfo_unexecuted_blocks=1 00:22:56.437 00:22:56.437 ' 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:56.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:56.437 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.438 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.438 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.438 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:56.438 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:56.438 09:43:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:22:56.438 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:04.600 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:04.600 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.600 09:44:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:04.600 Found net devices under 0000:31:00.0: cvl_0_0 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:04.600 Found net devices under 0000:31:00.1: cvl_0_1 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:23:04.600 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:04.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:23:04.601 00:23:04.601 --- 10.0.0.2 ping statistics --- 00:23:04.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.601 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:04.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:04.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:23:04.601 00:23:04.601 --- 10.0.0.1 ping statistics --- 00:23:04.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.601 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=3402080 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 3402080 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@834 -- # '[' -z 3402080 ']' 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local max_retries=100 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@843 -- # xtrace_disable 00:23:04.601 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:04.601 [2024-10-07 09:44:03.694961] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:23:04.601 [2024-10-07 09:44:03.695028] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.601 [2024-10-07 09:44:03.785044] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.601 [2024-10-07 09:44:03.880651] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.601 [2024-10-07 09:44:03.880709] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.601 [2024-10-07 09:44:03.880717] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.601 [2024-10-07 09:44:03.880725] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.601 [2024-10-07 09:44:03.880731] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.601 [2024-10-07 09:44:03.881498] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.862 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:23:04.862 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@867 -- # return 0 00:23:04.862 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:04.862 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@733 -- # xtrace_disable 00:23:04.862 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:05.123 [2024-10-07 09:44:04.555520] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:05.123 Malloc0 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:05.123 [2024-10-07 09:44:04.624663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3402325 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3402327 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3402329 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3402325 00:23:05.123 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:05.123 [2024-10-07 09:44:04.715532] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:23:05.123 [2024-10-07 09:44:04.715909] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:05.123 [2024-10-07 09:44:04.716215] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:06.506 Initializing NVMe Controllers 00:23:06.506 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:06.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:06.506 Initialization complete. Launching workers. 00:23:06.506 ======================================================== 00:23:06.506 Latency(us) 00:23:06.506 Device Information : IOPS MiB/s Average min max 00:23:06.506 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1517.00 5.93 659.04 286.17 1022.84 00:23:06.506 ======================================================== 00:23:06.506 Total : 1517.00 5.93 659.04 286.17 1022.84 00:23:06.506 00:23:06.506 Initializing NVMe Controllers 00:23:06.506 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:06.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:06.506 Initialization complete. Launching workers. 00:23:06.506 ======================================================== 00:23:06.506 Latency(us) 00:23:06.506 Device Information : IOPS MiB/s Average min max 00:23:06.506 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40913.47 40831.83 41123.50 00:23:06.506 ======================================================== 00:23:06.507 Total : 25.00 0.10 40913.47 40831.83 41123.50 00:23:06.507 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3402327 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3402329 00:23:06.507 Initializing NVMe Controllers 00:23:06.507 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:06.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:06.507 Initialization complete. Launching workers. 
00:23:06.507 ======================================================== 00:23:06.507 Latency(us) 00:23:06.507 Device Information : IOPS MiB/s Average min max 00:23:06.507 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1514.00 5.91 660.64 295.85 866.01 00:23:06.507 ======================================================== 00:23:06.507 Total : 1514.00 5.91 660.64 295.85 866.01 00:23:06.507 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:06.507 rmmod nvme_tcp 00:23:06.507 rmmod nvme_fabrics 00:23:06.507 rmmod nvme_keyring 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 3402080 ']' 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 3402080 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@953 -- # '[' -z 3402080 ']' 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # kill -0 3402080 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # uname 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:23:06.507 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3402080 00:23:06.507 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:23:06.507 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:23:06.507 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3402080' 00:23:06.507 killing process with pid 3402080 00:23:06.507 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # kill 3402080 00:23:06.507 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@977 -- # wait 3402080 00:23:06.767 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:06.767 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:06.767 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:06.767 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:06.767 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:23:06.767 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:06.767 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:23:06.767 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:06.767 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:06.767 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.767 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.767 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.678 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:08.678 00:23:08.678 real 0m12.666s 00:23:08.678 user 0m7.841s 00:23:08.678 sys 0m6.799s 00:23:08.678 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # xtrace_disable 00:23:08.678 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:08.678 ************************************ 00:23:08.678 END TEST nvmf_control_msg_list 00:23:08.678 ************************************ 00:23:08.938 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:08.938 09:44:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:23:08.938 09:44:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:23:08.938 09:44:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:08.938 ************************************ 00:23:08.938 START TEST nvmf_wait_for_buf 00:23:08.938 ************************************ 00:23:08.938 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:08.938 * Looking for test storage... 
00:23:08.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:08.938 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:23:08.938 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1626 -- # lcov --version 00:23:08.938 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:09.198 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:23:09.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.199 --rc genhtml_branch_coverage=1 00:23:09.199 --rc genhtml_function_coverage=1 00:23:09.199 --rc genhtml_legend=1 00:23:09.199 --rc geninfo_all_blocks=1 00:23:09.199 --rc geninfo_unexecuted_blocks=1 00:23:09.199 00:23:09.199 ' 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:23:09.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.199 --rc genhtml_branch_coverage=1 00:23:09.199 --rc genhtml_function_coverage=1 00:23:09.199 --rc genhtml_legend=1 00:23:09.199 --rc geninfo_all_blocks=1 00:23:09.199 --rc geninfo_unexecuted_blocks=1 00:23:09.199 00:23:09.199 ' 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:23:09.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.199 --rc genhtml_branch_coverage=1 00:23:09.199 --rc genhtml_function_coverage=1 00:23:09.199 --rc genhtml_legend=1 00:23:09.199 --rc geninfo_all_blocks=1 00:23:09.199 --rc geninfo_unexecuted_blocks=1 00:23:09.199 00:23:09.199 ' 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:23:09.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.199 --rc genhtml_branch_coverage=1 00:23:09.199 --rc genhtml_function_coverage=1 00:23:09.199 --rc genhtml_legend=1 00:23:09.199 --rc geninfo_all_blocks=1 00:23:09.199 --rc geninfo_unexecuted_blocks=1 00:23:09.199 00:23:09.199 ' 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.199 09:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:09.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 
00:23:09.199 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:17.337 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.337 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:17.337 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:17.337 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:17.337 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:17.337 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:17.337 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:17.337 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:23:17.337 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:17.337 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:23:17.337 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:23:17.337 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:23:17.337 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:23:17.337 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:23:17.337 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:17.337 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.337 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.337 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:17.338 
09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:17.338 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:17.338 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.338 
09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:17.338 Found net devices under 0000:31:00.0: cvl_0_0 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:17.338 Found net devices under 0000:31:00.1: cvl_0_1 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:17.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:23:17.338 00:23:17.338 --- 10.0.0.2 ping statistics --- 00:23:17.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.338 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:17.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:23:17.338 00:23:17.338 --- 10.0.0.1 ping statistics --- 00:23:17.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.338 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=3406845 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 3406845 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:17.338 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@834 -- # '[' -z 3406845 ']' 00:23:17.339 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.339 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local max_retries=100 00:23:17.339 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.339 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@843 -- # xtrace_disable 00:23:17.339 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:17.339 [2024-10-07 09:44:16.461406] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:23:17.339 [2024-10-07 09:44:16.461470] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.339 [2024-10-07 09:44:16.551326] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.339 [2024-10-07 09:44:16.645319] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.339 [2024-10-07 09:44:16.645369] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.339 [2024-10-07 09:44:16.645378] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.339 [2024-10-07 09:44:16.645385] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.339 [2024-10-07 09:44:16.645392] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.339 [2024-10-07 09:44:16.646190] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@867 -- # return 0 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@733 -- # xtrace_disable 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:17.909 09:44:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:17.909 Malloc0 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:17.909 [2024-10-07 09:44:17.438851] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:17.909 [2024-10-07 09:44:17.475169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:17.909 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:17.909 [2024-10-07 09:44:17.557719] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:19.827 Initializing NVMe Controllers 00:23:19.827 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:19.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:23:19.827 Initialization complete. Launching workers. 00:23:19.827 ======================================================== 00:23:19.827 Latency(us) 00:23:19.827 Device Information : IOPS MiB/s Average min max 00:23:19.827 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32294.57 7991.79 63859.53 00:23:19.827 ======================================================== 00:23:19.827 Total : 129.00 16.12 32294.57 7991.79 63859.53 00:23:19.827 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:19.827 rmmod nvme_tcp 00:23:19.827 rmmod nvme_fabrics 00:23:19.827 rmmod nvme_keyring 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 3406845 ']' 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 3406845 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@953 -- # '[' -z 3406845 ']' 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # kill -0 3406845 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@958 -- # uname 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3406845 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3406845' 00:23:19.827 killing process with pid 3406845 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # kill 3406845 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@977 -- # wait 3406845 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.827 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.371 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:22.371 00:23:22.371 real 0m13.104s 00:23:22.371 user 0m5.263s 00:23:22.371 sys 0m6.417s 00:23:22.371 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # xtrace_disable 00:23:22.371 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:22.371 ************************************ 00:23:22.371 END TEST nvmf_wait_for_buf 00:23:22.371 ************************************ 00:23:22.371 09:44:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:23:22.371 09:44:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:23:22.371 09:44:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:23:22.371 09:44:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:23:22.371 09:44:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:23:22.371 09:44:21 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.510 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:30.511 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:30.511 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:30.511 Found net devices under 0000:31:00.0: cvl_0_0 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:30.511 Found net devices under 0000:31:00.1: cvl_0_1 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:30.511 ************************************ 00:23:30.511 START TEST nvmf_perf_adq 00:23:30.511 ************************************ 00:23:30.511 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:30.511 * Looking for test storage... 00:23:30.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1626 -- # lcov --version 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.511 09:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:23:30.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.511 --rc genhtml_branch_coverage=1 00:23:30.511 --rc genhtml_function_coverage=1 00:23:30.511 --rc genhtml_legend=1 00:23:30.511 --rc geninfo_all_blocks=1 00:23:30.511 --rc geninfo_unexecuted_blocks=1 00:23:30.511 00:23:30.511 ' 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:23:30.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.511 --rc genhtml_branch_coverage=1 00:23:30.511 --rc genhtml_function_coverage=1 00:23:30.511 --rc genhtml_legend=1 00:23:30.511 --rc geninfo_all_blocks=1 00:23:30.511 --rc geninfo_unexecuted_blocks=1 00:23:30.511 00:23:30.511 ' 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:23:30.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.511 --rc genhtml_branch_coverage=1 00:23:30.511 --rc genhtml_function_coverage=1 00:23:30.511 --rc genhtml_legend=1 00:23:30.511 --rc geninfo_all_blocks=1 00:23:30.511 --rc geninfo_unexecuted_blocks=1 00:23:30.511 00:23:30.511 ' 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:23:30.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.511 --rc genhtml_branch_coverage=1 00:23:30.511 --rc genhtml_function_coverage=1 00:23:30.511 --rc genhtml_legend=1 00:23:30.511 --rc geninfo_all_blocks=1 00:23:30.511 --rc geninfo_unexecuted_blocks=1 00:23:30.511 00:23:30.511 ' 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
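The lt/cmp_versions walk traced just above is the harness probing its lcov version: each version string is split on '.', '-' or ':' and the fields are compared numerically, left to right. A minimal standalone sketch of that comparison, assuming the usual scripts/common.sh semantics (the function name and the padding of missing or non-numeric fields with 0 are illustrative, not the exact upstream code):

    version_lt() {    # version_lt 1.15 2  ->  returns 0 (true) when $1 < $2
        local -a ver1 ver2
        local v d1 d2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}    # shorter version pads with 0
            [[ $d1 =~ ^[0-9]+$ ]] || d1=0        # non-numeric fields compare as 0
            [[ $d2 =~ ^[0-9]+$ ]] || d2=0
            (( d1 < d2 )) && return 0
            (( d1 > d2 )) && return 1
        done
        return 1                                 # equal versions are not less-than
    }

With the values traced here, version_lt 1.15 2 succeeds on the first field (1 < 2), which is why the lcov 1.x-specific LCOV_OPTS are exported above.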
00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:30.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:30.511 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:37.115 09:44:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.115 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:37.116 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.116 09:44:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:37.116 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:37.116 Found net devices under 0000:31:00.0: cvl_0_0 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:37.116 Found net devices under 0000:31:00.1: cvl_0_1 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:37.116 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:39.030 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:41.079 09:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:46.374 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.374 09:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.374 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:46.375 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:46.375 Found net devices under 0000:31:00.0: cvl_0_0 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:46.375 Found net devices 
under 0000:31:00.1: cvl_0_1 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
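At this point nvmf_tcp_init has finished carving the two E810 ports into a target/initiator pair. Condensed from the trace above into one sequence (the cvl_0_0/cvl_0_1 names, the namespace name, and the 10.0.0.0/24 addresses are all specific to this job; the address flushes are omitted):

    ip netns add cvl_0_0_ns_spdk                    # target gets its own net namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

The pings that follow check both directions of that topology before the target is started.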
00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:46.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:23:46.375 00:23:46.375 --- 10.0.0.2 ping statistics --- 00:23:46.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.375 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:46.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:23:46.375 00:23:46.375 --- 10.0.0.1 ping statistics --- 00:23:46.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.375 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=3417356 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 3417356 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # '[' -z 3417356 ']' 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local max_retries=100 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
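nvmfappstart backgrounds nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket answers. A rough sketch of that launch-and-wait pattern, assuming paths relative to the spdk checkout (the real waitforlisten lives in autotest_common.sh; this loop only approximates it):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    for (( i = 0; i < 100; i++ )); do               # max_retries=100, as traced above
        # the target counts as listening once any RPC succeeds on the socket
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        kill -0 "$nvmfpid" 2> /dev/null || exit 1   # bail out if the target already died
        sleep 0.1
    done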
00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@843 -- # xtrace_disable 00:23:46.375 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:46.375 [2024-10-07 09:44:45.938027] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:23:46.375 [2024-10-07 09:44:45.938096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.375 [2024-10-07 09:44:46.027567] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:46.638 [2024-10-07 09:44:46.125344] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.638 [2024-10-07 09:44:46.125409] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.638 [2024-10-07 09:44:46.125418] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.638 [2024-10-07 09:44:46.125425] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.638 [2024-10-07 09:44:46.125432] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.638 [2024-10-07 09:44:46.127525] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.638 [2024-10-07 09:44:46.127696] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.638 [2024-10-07 09:44:46.127754] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.638 [2024-10-07 09:44:46.127754] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:47.209 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:23:47.209 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@867 -- # return 0 00:23:47.209 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:47.209 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@733 -- # xtrace_disable 00:23:47.209 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:47.209 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.209 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:23:47.209 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:47.209 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:47.209 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:47.209 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:47.209 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:47.209 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:47.209 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 
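Once the target is up, adq_configure_nvmf_target reads back the default socket implementation (posix on this run) and sets the posix socket options used for the ADQ pass: the placement-id mode and server-side zero-copy send. Issued through scripts/rpc.py directly, the same two calls from the trace look roughly like this (rpc_cmd in the trace is a thin wrapper over the same UNIX socket):

    impl=$(./scripts/rpc.py -s /var/tmp/spdk.sock sock_get_default_impl | jq -r .impl_name)
    ./scripts/rpc.py -s /var/tmp/spdk.sock sock_impl_set_options -i "$impl" \
        --enable-placement-id 0 --enable-zerocopy-send-server

The trace then continues with framework_start_init and a tcp transport created with --io-unit-size 8192 --sock-priority 0 before the Malloc1 namespace and the cnode1 subsystem are wired up.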
00:23:47.209 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:47.209 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:47.210 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:47.210 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:47.210 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:47.210 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:47.470 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:47.470 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:47.470 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:47.470 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:47.470 [2024-10-07 09:44:46.973165] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.470 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:47.471 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:47.471 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:47.471 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:47.471 Malloc1 00:23:47.471 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:47.471 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:47.471 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:47.471 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:47.471 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:47.471 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:47.471 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:47.471 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:47.471 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:47.471 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:47.471 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:47.471 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:47.471 [2024-10-07 09:44:47.038846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.471 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@592 -- 
# [[ 0 == 0 ]] 00:23:47.471 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3417595 00:23:47.471 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:23:47.471 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:50.019 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:23:50.019 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:50.019 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:50.019 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:50.019 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:23:50.019 "tick_rate": 2400000000, 00:23:50.019 "poll_groups": [ 00:23:50.019 { 00:23:50.019 "name": "nvmf_tgt_poll_group_000", 00:23:50.019 "admin_qpairs": 1, 00:23:50.019 "io_qpairs": 1, 00:23:50.019 "current_admin_qpairs": 1, 00:23:50.019 "current_io_qpairs": 1, 00:23:50.019 "pending_bdev_io": 0, 00:23:50.019 "completed_nvme_io": 16985, 00:23:50.019 "transports": [ 00:23:50.019 { 00:23:50.019 "trtype": "TCP" 00:23:50.019 } 00:23:50.019 ] 00:23:50.019 }, 00:23:50.019 { 00:23:50.019 "name": "nvmf_tgt_poll_group_001", 00:23:50.019 "admin_qpairs": 0, 00:23:50.019 "io_qpairs": 1, 00:23:50.019 "current_admin_qpairs": 0, 00:23:50.019 "current_io_qpairs": 1, 00:23:50.019 "pending_bdev_io": 0, 00:23:50.019 "completed_nvme_io": 19031, 00:23:50.019 "transports": [ 00:23:50.019 { 00:23:50.019 "trtype": "TCP" 00:23:50.019 } 00:23:50.019 ] 00:23:50.019 }, 00:23:50.019 { 00:23:50.019 "name": "nvmf_tgt_poll_group_002", 00:23:50.019 "admin_qpairs": 0, 00:23:50.019 "io_qpairs": 1, 00:23:50.019 "current_admin_qpairs": 0, 00:23:50.019 "current_io_qpairs": 1, 00:23:50.019 "pending_bdev_io": 0, 00:23:50.019 "completed_nvme_io": 19878, 00:23:50.019 "transports": [ 00:23:50.019 { 00:23:50.019 "trtype": "TCP" 00:23:50.019 } 00:23:50.019 ] 00:23:50.019 }, 00:23:50.019 { 00:23:50.019 "name": "nvmf_tgt_poll_group_003", 00:23:50.019 "admin_qpairs": 0, 00:23:50.019 "io_qpairs": 1, 00:23:50.019 "current_admin_qpairs": 0, 00:23:50.019 "current_io_qpairs": 1, 00:23:50.019 "pending_bdev_io": 0, 00:23:50.019 "completed_nvme_io": 17383, 00:23:50.019 "transports": [ 00:23:50.019 { 00:23:50.019 "trtype": "TCP" 00:23:50.019 } 00:23:50.019 ] 00:23:50.019 } 00:23:50.019 ] 00:23:50.019 }' 00:23:50.019 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:50.019 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:23:50.019 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:23:50.019 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:23:50.019 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3417595 00:23:58.153 Initializing NVMe Controllers 00:23:58.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:58.153 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:58.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:58.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:58.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:58.153 Initialization complete. Launching workers. 00:23:58.153 ======================================================== 00:23:58.153 Latency(us) 00:23:58.153 Device Information : IOPS MiB/s Average min max 00:23:58.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13568.62 53.00 4717.21 1483.76 11601.77 00:23:58.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13544.62 52.91 4725.52 1350.55 13208.37 00:23:58.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12683.13 49.54 5046.14 1284.28 13948.76 00:23:58.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12888.22 50.34 4964.88 1212.16 12920.92 00:23:58.153 ======================================================== 00:23:58.153 Total : 52684.59 205.80 4859.12 1212.16 13948.76 00:23:58.153 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:58.153 rmmod nvme_tcp 00:23:58.153 rmmod nvme_fabrics 00:23:58.153 rmmod nvme_keyring 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 3417356 ']' 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 3417356 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' -z 3417356 ']' 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # kill -0 3417356 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # uname 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3417356 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # echo 'killing process with 
pid 3417356' 00:23:58.153 killing process with pid 3417356 00:23:58.153 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # kill 3417356 00:23:58.154 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@977 -- # wait 3417356 00:23:58.154 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:58.154 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:58.154 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:58.154 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:58.154 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:23:58.154 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:58.154 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:23:58.154 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:58.154 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:58.154 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.154 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.154 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.064 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:00.064 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:24:00.064 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:24:00.064 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:24:01.978 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:24:03.890 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:09.180 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:09.180 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:09.180 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:09.181 Found net devices under 0000:31:00.0: cvl_0_0 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:09.181 Found net devices under 0000:31:00.1: cvl_0_1 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.181 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:09.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.715 ms 00:24:09.442 00:24:09.442 --- 10.0.0.2 ping statistics --- 00:24:09.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.442 rtt min/avg/max/mdev = 0.715/0.715/0.715/0.000 ms 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:09.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:24:09.442 00:24:09.442 --- 10.0.0.1 ping statistics --- 00:24:09.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.442 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:09.442 net.core.busy_poll = 1 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:09.442 net.core.busy_read = 1 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:09.442 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:09.702 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:09.702 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:09.702 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:09.702 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:09.702 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:09.702 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:09.702 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:09.702 09:45:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=3422681 00:24:09.702 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 3422681 00:24:09.702 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # '[' -z 3422681 ']' 00:24:09.702 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.702 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local max_retries=100 00:24:09.702 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.702 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@843 -- # xtrace_disable 00:24:09.702 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:09.702 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:09.702 [2024-10-07 09:45:09.263485] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:24:09.702 [2024-10-07 09:45:09.263551] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.703 [2024-10-07 09:45:09.357761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:09.963 [2024-10-07 09:45:09.453751] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.963 [2024-10-07 09:45:09.453817] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.963 [2024-10-07 09:45:09.453825] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.963 [2024-10-07 09:45:09.453833] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.964 [2024-10-07 09:45:09.453840] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
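
The adq_configure_driver sequence traced just above reduces to six commands: enable hardware TC offload on the E810 port, turn off the driver's packet-inspect optimization, enable kernel busy polling, then split the port into two traffic classes and install an offloaded flower filter that steers the listener's NVMe/TCP traffic into the ADQ class. A minimal consolidated sketch, assuming this run's interface (cvl_0_0 inside the cvl_0_0_ns_spdk namespace) and listener (10.0.0.2:4420); the commands are copied from the trace, only the shell variables are added for illustration:

  #!/usr/bin/env bash
  # Consolidated sketch of the adq_configure_driver steps traced above.
  # IFACE, TADDR, TPORT and NS mirror this run (E810 port cvl_0_0 inside
  # the cvl_0_0_ns_spdk namespace, NVMe/TCP listener on 10.0.0.2:4420).
  IFACE=cvl_0_0
  TADDR=10.0.0.2
  TPORT=4420
  NS=(ip netns exec cvl_0_0_ns_spdk)

  "${NS[@]}" ethtool --offload "$IFACE" hw-tc-offload on
  "${NS[@]}" ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1    # poll sockets instead of sleeping on them
  sysctl -w net.core.busy_read=1
  # Two traffic classes: TC0 (default) on queues 2@0, TC1 (ADQ) on queues
  # 2@2, offloaded to the NIC in channel mode.
  "${NS[@]}" tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 \
      queues 2@0 2@2 hw 1 mode channel
  "${NS[@]}" tc qdisc add dev "$IFACE" ingress
  # Steer the listener's NVMe/TCP traffic into TC1; skip_sw forces the
  # match to happen in hardware.
  "${NS[@]}" tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
      dst_ip "$TADDR"/32 ip_proto tcp dst_port "$TPORT" skip_sw hw_tc 1

With skip_sw the flower match can only be performed by the NIC, which is the point of ADQ steering; the nvmf_get_stats/jq check that follows in the trace then verifies how many poll groups ended up with active I/O queue pairs.
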
00:24:09.964 [2024-10-07 09:45:09.456332] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.964 [2024-10-07 09:45:09.456490] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.964 [2024-10-07 09:45:09.456665] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.964 [2024-10-07 09:45:09.456667] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.537 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:24:10.537 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@867 -- # return 0 00:24:10.537 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:10.537 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@733 -- # xtrace_disable 00:24:10.537 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:10.537 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.537 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:24:10.537 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:10.537 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:10.537 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:10.537 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:10.797 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:10.797 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:10.798 [2024-10-07 09:45:10.351727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@592 -- # [[ 0 == 0 
]] 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:10.798 Malloc1 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:10.798 [2024-10-07 09:45:10.417746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3422917 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:24:10.798 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:13.346 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:24:13.346 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:13.346 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:13.346 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:13.346 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:24:13.346 "tick_rate": 2400000000, 00:24:13.346 "poll_groups": [ 00:24:13.346 { 00:24:13.346 "name": "nvmf_tgt_poll_group_000", 00:24:13.346 "admin_qpairs": 1, 00:24:13.346 "io_qpairs": 2, 00:24:13.346 "current_admin_qpairs": 1, 00:24:13.346 "current_io_qpairs": 2, 00:24:13.346 "pending_bdev_io": 0, 00:24:13.346 
"completed_nvme_io": 24998, 00:24:13.346 "transports": [ 00:24:13.346 { 00:24:13.346 "trtype": "TCP" 00:24:13.346 } 00:24:13.346 ] 00:24:13.346 }, 00:24:13.346 { 00:24:13.346 "name": "nvmf_tgt_poll_group_001", 00:24:13.346 "admin_qpairs": 0, 00:24:13.346 "io_qpairs": 2, 00:24:13.346 "current_admin_qpairs": 0, 00:24:13.346 "current_io_qpairs": 2, 00:24:13.346 "pending_bdev_io": 0, 00:24:13.346 "completed_nvme_io": 25412, 00:24:13.346 "transports": [ 00:24:13.346 { 00:24:13.346 "trtype": "TCP" 00:24:13.346 } 00:24:13.346 ] 00:24:13.346 }, 00:24:13.346 { 00:24:13.346 "name": "nvmf_tgt_poll_group_002", 00:24:13.346 "admin_qpairs": 0, 00:24:13.346 "io_qpairs": 0, 00:24:13.346 "current_admin_qpairs": 0, 00:24:13.346 "current_io_qpairs": 0, 00:24:13.346 "pending_bdev_io": 0, 00:24:13.346 "completed_nvme_io": 0, 00:24:13.346 "transports": [ 00:24:13.346 { 00:24:13.346 "trtype": "TCP" 00:24:13.346 } 00:24:13.346 ] 00:24:13.346 }, 00:24:13.346 { 00:24:13.346 "name": "nvmf_tgt_poll_group_003", 00:24:13.346 "admin_qpairs": 0, 00:24:13.346 "io_qpairs": 0, 00:24:13.346 "current_admin_qpairs": 0, 00:24:13.346 "current_io_qpairs": 0, 00:24:13.346 "pending_bdev_io": 0, 00:24:13.346 "completed_nvme_io": 0, 00:24:13.346 "transports": [ 00:24:13.346 { 00:24:13.346 "trtype": "TCP" 00:24:13.346 } 00:24:13.346 ] 00:24:13.346 } 00:24:13.346 ] 00:24:13.346 }' 00:24:13.346 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:13.346 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:24:13.346 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:24:13.346 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:24:13.346 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3422917 00:24:21.485 Initializing NVMe Controllers 00:24:21.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:21.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:21.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:21.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:21.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:21.485 Initialization complete. Launching workers. 
00:24:21.485 ======================================================== 00:24:21.485 Latency(us) 00:24:21.485 Device Information : IOPS MiB/s Average min max 00:24:21.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9663.80 37.75 6624.68 1392.32 55185.18 00:24:21.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10041.70 39.23 6372.38 1178.43 54205.36 00:24:21.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9241.40 36.10 6926.99 1079.29 52887.42 00:24:21.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8758.60 34.21 7306.58 1257.10 52565.03 00:24:21.485 ======================================================== 00:24:21.485 Total : 37705.50 147.29 6789.98 1079.29 55185.18 00:24:21.485 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:21.485 rmmod nvme_tcp 00:24:21.485 rmmod nvme_fabrics 00:24:21.485 rmmod nvme_keyring 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 3422681 ']' 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 3422681 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' -z 3422681 ']' 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # kill -0 3422681 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # uname 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3422681 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3422681' 00:24:21.485 killing process with pid 3422681 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # kill 3422681 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@977 -- # wait 3422681 00:24:21.485 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:21.486 
09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:21.486 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:21.486 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:24:21.486 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:24:21.486 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:21.486 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:24:21.486 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:21.486 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:21.486 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.486 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.486 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.784 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:24.784 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:24:24.784 00:24:24.784 real 0m54.986s 00:24:24.784 user 2m49.613s 00:24:24.784 sys 0m11.749s 00:24:24.784 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # xtrace_disable 00:24:24.784 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:24.784 ************************************ 00:24:24.784 END TEST nvmf_perf_adq 00:24:24.784 ************************************ 00:24:24.784 09:45:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:24.784 09:45:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:24:24.784 09:45:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1110 -- # xtrace_disable 00:24:24.784 09:45:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:24.784 ************************************ 00:24:24.784 START TEST nvmf_shutdown 00:24:24.784 ************************************ 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:24.784 * Looking for test storage... 
00:24:24.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1626 -- # lcov --version 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:24.784 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:24:24.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.785 --rc genhtml_branch_coverage=1 00:24:24.785 --rc genhtml_function_coverage=1 00:24:24.785 --rc genhtml_legend=1 00:24:24.785 --rc geninfo_all_blocks=1 00:24:24.785 --rc geninfo_unexecuted_blocks=1 00:24:24.785 00:24:24.785 ' 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:24:24.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.785 --rc genhtml_branch_coverage=1 00:24:24.785 --rc genhtml_function_coverage=1 00:24:24.785 --rc genhtml_legend=1 00:24:24.785 --rc geninfo_all_blocks=1 00:24:24.785 --rc geninfo_unexecuted_blocks=1 00:24:24.785 00:24:24.785 ' 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:24:24.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.785 --rc genhtml_branch_coverage=1 00:24:24.785 --rc genhtml_function_coverage=1 00:24:24.785 --rc genhtml_legend=1 00:24:24.785 --rc geninfo_all_blocks=1 00:24:24.785 --rc geninfo_unexecuted_blocks=1 00:24:24.785 00:24:24.785 ' 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:24:24.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.785 --rc genhtml_branch_coverage=1 00:24:24.785 --rc genhtml_function_coverage=1 00:24:24.785 --rc genhtml_legend=1 00:24:24.785 --rc geninfo_all_blocks=1 00:24:24.785 --rc geninfo_unexecuted_blocks=1 00:24:24.785 00:24:24.785 ' 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
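
Sandwiched into the shutdown-test preamble above is a trace of scripts/common.sh's version comparison (lt 1.15 2, checking whether the installed lcov predates 2.x): both version strings are split on '.', '-' and ':' into arrays and compared component by component. A standalone sketch of the same idea; lt mirrors the traced helper's name and observable behavior, and treating a missing component as 0 is an assumption added here:

  #!/usr/bin/env bash
  # Sketch of the cmp_versions/lt helper traced above: split two dotted
  # versions on '.', '-' or ':' and compare component by component.
  # Assumption: a missing component compares as 0 (so 1.15 < 2 holds).
  lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing component wins
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal is not strictly less-than
  }

  lt 1.15 2 && echo 'lcov 1.15 predates 2.x'

Component-wise integer comparison avoids the lexicographic trap where "1.15" would otherwise sort before "1.9".
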
00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.785 09:45:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:24.785 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:24.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1110 -- # xtrace_disable 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:24.786 ************************************ 00:24:24.786 START TEST nvmf_shutdown_tc1 00:24:24.786 ************************************ 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # nvmf_shutdown_tc1 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:24.786 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:32.923 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:32.924 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:32.924 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:32.924 Found net devices under 0000:31:00.0: cvl_0_0 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:32.924 Found net devices under 0000:31:00.1: cvl_0_1 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
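The "[: : integer expression expected" complaint recorded at nvmf/common.sh line 33 above comes from the traced test '[' '' -eq 1 ']': an unset variable expanded to an empty string and reached a numeric -eq comparison. The message is non-fatal here, since the test simply evaluates false and execution continues. A guarded expansion avoids the noise; a minimal sketch, with NVMF_FLAG standing in for the variable the trace leaves unnamed:

# reproduces the traced failure: an empty string reaches a numeric test
[ '' -eq 1 ]                                  # [: : integer expression expected

# guarded form: an unset or empty variable falls back to 0
[ "${NVMF_FLAG:-0}" -eq 1 ] && echo "flag enabled"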
00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:32.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:24:32.924 00:24:32.924 --- 10.0.0.2 ping statistics --- 00:24:32.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.924 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:32.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:24:32.924 00:24:32.924 --- 10.0.0.1 ping statistics --- 00:24:32.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.924 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=3429686 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 3429686 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:32.924 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # '[' -z 3429686 ']' 00:24:32.925 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.925 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local max_retries=100 00:24:32.925 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
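The nvmf_tcp_init records above amount to a self-contained two-port testbed: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened in iptables, and both directions are ping-verified before nvmf_tgt is launched under ip netns exec. The same wiring, sketched generically with IFACE_TGT and IFACE_INI as placeholders for two physically connected ports:

ip netns add tgt_ns                                   # namespace for the target side
ip link set "$IFACE_TGT" netns tgt_ns                 # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev "$IFACE_INI"              # initiator address, root namespace
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev "$IFACE_TGT"
ip link set "$IFACE_INI" up
ip netns exec tgt_ns ip link set "$IFACE_TGT" up
ip netns exec tgt_ns ip link set lo up
iptables -I INPUT 1 -i "$IFACE_INI" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # root namespace -> target namespace
ip netns exec tgt_ns ping -c 1 10.0.0.1               # target namespace -> root namespace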
00:24:32.925 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@843 -- # xtrace_disable 00:24:32.925 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:32.925 [2024-10-07 09:45:32.016573] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:24:32.925 [2024-10-07 09:45:32.016647] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.925 [2024-10-07 09:45:32.110727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:32.925 [2024-10-07 09:45:32.205107] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.925 [2024-10-07 09:45:32.205173] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.925 [2024-10-07 09:45:32.205182] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.925 [2024-10-07 09:45:32.205190] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.925 [2024-10-07 09:45:32.205196] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:32.925 [2024-10-07 09:45:32.207691] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.925 [2024-10-07 09:45:32.207859] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:24:32.925 [2024-10-07 09:45:32.208025] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:24:32.925 [2024-10-07 09:45:32.208026] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.184 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:24:33.184 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@867 -- # return 0 00:24:33.184 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:33.184 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@733 -- # xtrace_disable 00:24:33.185 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:33.445 [2024-10-07 09:45:32.875099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:33.445 09:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:33.445 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:33.446 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:33.446 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:33.446 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:33.446 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:33.446 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:33.446 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:33.446 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:33.446 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:33.446 Malloc1 
00:24:33.446 [2024-10-07 09:45:32.982568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.446 Malloc2 00:24:33.446 Malloc3 00:24:33.446 Malloc4 00:24:33.706 Malloc5 00:24:33.706 Malloc6 00:24:33.706 Malloc7 00:24:33.706 Malloc8 00:24:33.706 Malloc9 00:24:33.706 Malloc10 00:24:33.706 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:33.706 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:33.706 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@733 -- # xtrace_disable 00:24:33.706 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3429914 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3429914 /var/tmp/bdevperf.sock 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # '[' -z 3429914 ']' 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local max_retries=100 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:33.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
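The Malloc1 through Malloc10 bdevs above are created in one shot: the shutdown.sh@28/@29 loop appends one batch of RPCs per subsystem to rpcs.txt, and the single rpc_cmd at @36 replays the whole file against the target. The heredoc body written at @29 is not expanded in this trace, so the batch below is only an illustrative sketch of the shape such a file takes (64 and 512 are the MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE set at shutdown.sh@12/@13):

# hypothetical per-subsystem batch; the real RPC lines at @29 are not shown in the trace
for i in {1..10}; do
cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done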
00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@843 -- # xtrace_disable 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:33.968 { 00:24:33.968 "params": { 00:24:33.968 "name": "Nvme$subsystem", 00:24:33.968 "trtype": "$TEST_TRANSPORT", 00:24:33.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.968 "adrfam": "ipv4", 00:24:33.968 "trsvcid": "$NVMF_PORT", 00:24:33.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.968 "hdgst": ${hdgst:-false}, 00:24:33.968 "ddgst": ${ddgst:-false} 00:24:33.968 }, 00:24:33.968 "method": "bdev_nvme_attach_controller" 00:24:33.968 } 00:24:33.968 EOF 00:24:33.968 )") 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:33.968 { 00:24:33.968 "params": { 00:24:33.968 "name": "Nvme$subsystem", 00:24:33.968 "trtype": "$TEST_TRANSPORT", 00:24:33.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.968 "adrfam": "ipv4", 00:24:33.968 "trsvcid": "$NVMF_PORT", 00:24:33.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.968 "hdgst": ${hdgst:-false}, 00:24:33.968 "ddgst": ${ddgst:-false} 00:24:33.968 }, 00:24:33.968 "method": "bdev_nvme_attach_controller" 00:24:33.968 } 00:24:33.968 EOF 00:24:33.968 )") 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:33.968 { 00:24:33.968 "params": { 00:24:33.968 "name": "Nvme$subsystem", 00:24:33.968 "trtype": "$TEST_TRANSPORT", 00:24:33.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.968 "adrfam": "ipv4", 00:24:33.968 "trsvcid": "$NVMF_PORT", 00:24:33.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.968 "hdgst": ${hdgst:-false}, 00:24:33.968 "ddgst": ${ddgst:-false} 00:24:33.968 }, 00:24:33.968 "method": "bdev_nvme_attach_controller" 00:24:33.968 } 00:24:33.968 EOF 00:24:33.968 )") 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:33.968 { 00:24:33.968 "params": { 00:24:33.968 "name": "Nvme$subsystem", 00:24:33.968 "trtype": "$TEST_TRANSPORT", 00:24:33.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.968 "adrfam": "ipv4", 00:24:33.968 "trsvcid": "$NVMF_PORT", 00:24:33.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.968 "hdgst": ${hdgst:-false}, 00:24:33.968 "ddgst": ${ddgst:-false} 00:24:33.968 }, 00:24:33.968 "method": "bdev_nvme_attach_controller" 00:24:33.968 } 00:24:33.968 EOF 00:24:33.968 )") 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:33.968 { 00:24:33.968 "params": { 00:24:33.968 "name": "Nvme$subsystem", 00:24:33.968 "trtype": "$TEST_TRANSPORT", 00:24:33.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.968 "adrfam": "ipv4", 00:24:33.968 "trsvcid": "$NVMF_PORT", 00:24:33.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.968 "hdgst": ${hdgst:-false}, 00:24:33.968 "ddgst": ${ddgst:-false} 00:24:33.968 }, 00:24:33.968 "method": "bdev_nvme_attach_controller" 00:24:33.968 } 00:24:33.968 EOF 00:24:33.968 )") 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:33.968 { 00:24:33.968 "params": { 00:24:33.968 "name": "Nvme$subsystem", 00:24:33.968 "trtype": "$TEST_TRANSPORT", 00:24:33.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.968 "adrfam": "ipv4", 00:24:33.968 "trsvcid": "$NVMF_PORT", 00:24:33.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.968 "hdgst": ${hdgst:-false}, 00:24:33.968 "ddgst": ${ddgst:-false} 00:24:33.968 }, 00:24:33.968 "method": "bdev_nvme_attach_controller" 00:24:33.968 } 00:24:33.968 EOF 00:24:33.968 )") 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:33.968 [2024-10-07 09:45:33.432639] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
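The nvmf/common.sh@558-@584 records being traced here show how gen_nvmf_target_json builds the bdev_svc/bdevperf configuration in pure shell: one heredoc-expanded JSON fragment per subsystem is pushed onto the config array, the fragments are comma-joined via IFS=, over "${config[*]}", pretty-printed with jq, and handed to the consumer as /dev/fd/63 through process substitution. A condensed sketch of the same pattern; the outer "subsystems"/"config" wrapper follows SPDK's JSON-config shape and is an assumption, since the trace never expands it:

gen_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do                 # default to subsystem 1, as in the trace
    config+=("$(cat <<EOF
{ "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" } }
EOF
)")
  done
  # comma-join the fragments and wrap them so jq receives one valid document
  (IFS=,; printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }' "${config[*]}") | jq .
}
# consumed via process substitution, which is what /dev/fd/63 in the trace refers to:
# bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_target_json 1 2 3)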
00:24:33.968 [2024-10-07 09:45:33.432691] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:33.968 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:33.968 { 00:24:33.968 "params": { 00:24:33.968 "name": "Nvme$subsystem", 00:24:33.968 "trtype": "$TEST_TRANSPORT", 00:24:33.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.968 "adrfam": "ipv4", 00:24:33.968 "trsvcid": "$NVMF_PORT", 00:24:33.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.969 "hdgst": ${hdgst:-false}, 00:24:33.969 "ddgst": ${ddgst:-false} 00:24:33.969 }, 00:24:33.969 "method": "bdev_nvme_attach_controller" 00:24:33.969 } 00:24:33.969 EOF 00:24:33.969 )") 00:24:33.969 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:33.969 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:33.969 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:33.969 { 00:24:33.969 "params": { 00:24:33.969 "name": "Nvme$subsystem", 00:24:33.969 "trtype": "$TEST_TRANSPORT", 00:24:33.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.969 "adrfam": "ipv4", 00:24:33.969 "trsvcid": "$NVMF_PORT", 00:24:33.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.969 "hdgst": ${hdgst:-false}, 00:24:33.969 "ddgst": ${ddgst:-false} 00:24:33.969 }, 00:24:33.969 "method": "bdev_nvme_attach_controller" 00:24:33.969 } 00:24:33.969 EOF 00:24:33.969 )") 00:24:33.969 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:33.969 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:33.969 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:33.969 { 00:24:33.969 "params": { 00:24:33.969 "name": "Nvme$subsystem", 00:24:33.969 "trtype": "$TEST_TRANSPORT", 00:24:33.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.969 "adrfam": "ipv4", 00:24:33.969 "trsvcid": "$NVMF_PORT", 00:24:33.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.969 "hdgst": ${hdgst:-false}, 00:24:33.969 "ddgst": ${ddgst:-false} 00:24:33.969 }, 00:24:33.969 "method": "bdev_nvme_attach_controller" 00:24:33.969 } 00:24:33.969 EOF 00:24:33.969 )") 00:24:33.969 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:33.969 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:33.969 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:33.969 { 00:24:33.969 "params": { 00:24:33.969 "name": "Nvme$subsystem", 00:24:33.969 "trtype": "$TEST_TRANSPORT", 00:24:33.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.969 "adrfam": "ipv4", 
00:24:33.969 "trsvcid": "$NVMF_PORT", 00:24:33.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.969 "hdgst": ${hdgst:-false}, 00:24:33.969 "ddgst": ${ddgst:-false} 00:24:33.969 }, 00:24:33.969 "method": "bdev_nvme_attach_controller" 00:24:33.969 } 00:24:33.969 EOF 00:24:33.969 )") 00:24:33.969 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:33.969 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:24:33.969 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:24:33.969 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:24:33.969 "params": { 00:24:33.969 "name": "Nvme1", 00:24:33.969 "trtype": "tcp", 00:24:33.969 "traddr": "10.0.0.2", 00:24:33.969 "adrfam": "ipv4", 00:24:33.969 "trsvcid": "4420", 00:24:33.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:33.969 "hdgst": false, 00:24:33.969 "ddgst": false 00:24:33.969 }, 00:24:33.969 "method": "bdev_nvme_attach_controller" 00:24:33.969 },{ 00:24:33.969 "params": { 00:24:33.969 "name": "Nvme2", 00:24:33.969 "trtype": "tcp", 00:24:33.969 "traddr": "10.0.0.2", 00:24:33.969 "adrfam": "ipv4", 00:24:33.969 "trsvcid": "4420", 00:24:33.969 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:33.969 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:33.969 "hdgst": false, 00:24:33.969 "ddgst": false 00:24:33.969 }, 00:24:33.969 "method": "bdev_nvme_attach_controller" 00:24:33.969 },{ 00:24:33.969 "params": { 00:24:33.969 "name": "Nvme3", 00:24:33.969 "trtype": "tcp", 00:24:33.969 "traddr": "10.0.0.2", 00:24:33.969 "adrfam": "ipv4", 00:24:33.969 "trsvcid": "4420", 00:24:33.969 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:33.969 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:33.969 "hdgst": false, 00:24:33.969 "ddgst": false 00:24:33.969 }, 00:24:33.969 "method": "bdev_nvme_attach_controller" 00:24:33.969 },{ 00:24:33.969 "params": { 00:24:33.969 "name": "Nvme4", 00:24:33.969 "trtype": "tcp", 00:24:33.969 "traddr": "10.0.0.2", 00:24:33.969 "adrfam": "ipv4", 00:24:33.969 "trsvcid": "4420", 00:24:33.969 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:33.969 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:33.969 "hdgst": false, 00:24:33.969 "ddgst": false 00:24:33.969 }, 00:24:33.969 "method": "bdev_nvme_attach_controller" 00:24:33.969 },{ 00:24:33.969 "params": { 00:24:33.969 "name": "Nvme5", 00:24:33.969 "trtype": "tcp", 00:24:33.969 "traddr": "10.0.0.2", 00:24:33.969 "adrfam": "ipv4", 00:24:33.969 "trsvcid": "4420", 00:24:33.969 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:33.969 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:33.969 "hdgst": false, 00:24:33.969 "ddgst": false 00:24:33.969 }, 00:24:33.969 "method": "bdev_nvme_attach_controller" 00:24:33.969 },{ 00:24:33.969 "params": { 00:24:33.969 "name": "Nvme6", 00:24:33.969 "trtype": "tcp", 00:24:33.969 "traddr": "10.0.0.2", 00:24:33.969 "adrfam": "ipv4", 00:24:33.969 "trsvcid": "4420", 00:24:33.969 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:33.969 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:33.969 "hdgst": false, 00:24:33.969 "ddgst": false 00:24:33.969 }, 00:24:33.969 "method": "bdev_nvme_attach_controller" 00:24:33.969 },{ 00:24:33.969 "params": { 00:24:33.969 "name": "Nvme7", 00:24:33.969 "trtype": "tcp", 00:24:33.969 "traddr": "10.0.0.2", 00:24:33.969 
"adrfam": "ipv4", 00:24:33.969 "trsvcid": "4420", 00:24:33.969 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:33.969 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:33.969 "hdgst": false, 00:24:33.969 "ddgst": false 00:24:33.969 }, 00:24:33.969 "method": "bdev_nvme_attach_controller" 00:24:33.969 },{ 00:24:33.969 "params": { 00:24:33.969 "name": "Nvme8", 00:24:33.969 "trtype": "tcp", 00:24:33.969 "traddr": "10.0.0.2", 00:24:33.969 "adrfam": "ipv4", 00:24:33.969 "trsvcid": "4420", 00:24:33.969 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:33.969 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:33.969 "hdgst": false, 00:24:33.969 "ddgst": false 00:24:33.969 }, 00:24:33.969 "method": "bdev_nvme_attach_controller" 00:24:33.969 },{ 00:24:33.969 "params": { 00:24:33.969 "name": "Nvme9", 00:24:33.969 "trtype": "tcp", 00:24:33.969 "traddr": "10.0.0.2", 00:24:33.969 "adrfam": "ipv4", 00:24:33.969 "trsvcid": "4420", 00:24:33.969 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:33.969 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:33.969 "hdgst": false, 00:24:33.969 "ddgst": false 00:24:33.969 }, 00:24:33.969 "method": "bdev_nvme_attach_controller" 00:24:33.969 },{ 00:24:33.969 "params": { 00:24:33.969 "name": "Nvme10", 00:24:33.969 "trtype": "tcp", 00:24:33.969 "traddr": "10.0.0.2", 00:24:33.969 "adrfam": "ipv4", 00:24:33.969 "trsvcid": "4420", 00:24:33.969 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:33.969 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:33.969 "hdgst": false, 00:24:33.969 "ddgst": false 00:24:33.969 }, 00:24:33.969 "method": "bdev_nvme_attach_controller" 00:24:33.969 }' 00:24:33.969 [2024-10-07 09:45:33.514620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.969 [2024-10-07 09:45:33.579589] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.356 09:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:24:35.356 09:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@867 -- # return 0 00:24:35.356 09:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:35.356 09:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:35.356 09:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:35.356 09:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:35.356 09:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3429914 00:24:35.356 09:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:24:35.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3429914 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:35.356 09:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:24:36.743 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3429686 00:24:36.743 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:36.743 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:36.743 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:24:36.743 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:24:36.743 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:36.743 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:36.743 { 00:24:36.743 "params": { 00:24:36.743 "name": "Nvme$subsystem", 00:24:36.743 "trtype": "$TEST_TRANSPORT", 00:24:36.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.743 "adrfam": "ipv4", 00:24:36.743 "trsvcid": "$NVMF_PORT", 00:24:36.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.743 "hdgst": ${hdgst:-false}, 00:24:36.743 "ddgst": ${ddgst:-false} 00:24:36.743 }, 00:24:36.743 "method": "bdev_nvme_attach_controller" 00:24:36.743 } 00:24:36.743 EOF 00:24:36.743 )") 00:24:36.743 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:36.743 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:36.743 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:36.743 { 00:24:36.743 "params": { 00:24:36.743 "name": "Nvme$subsystem", 00:24:36.743 "trtype": "$TEST_TRANSPORT", 00:24:36.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.743 "adrfam": "ipv4", 00:24:36.743 "trsvcid": "$NVMF_PORT", 00:24:36.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.743 "hdgst": ${hdgst:-false}, 00:24:36.743 "ddgst": ${ddgst:-false} 00:24:36.743 }, 00:24:36.743 "method": "bdev_nvme_attach_controller" 00:24:36.743 } 00:24:36.743 EOF 00:24:36.743 )") 00:24:36.743 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:36.743 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:36.743 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:36.743 { 00:24:36.743 "params": { 00:24:36.743 "name": "Nvme$subsystem", 00:24:36.743 "trtype": "$TEST_TRANSPORT", 00:24:36.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.743 "adrfam": "ipv4", 00:24:36.743 "trsvcid": "$NVMF_PORT", 00:24:36.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.743 "hdgst": ${hdgst:-false}, 00:24:36.743 "ddgst": ${ddgst:-false} 00:24:36.743 }, 00:24:36.743 "method": "bdev_nvme_attach_controller" 00:24:36.743 } 00:24:36.743 EOF 00:24:36.743 )") 00:24:36.743 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:36.743 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:36.743 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:36.743 { 00:24:36.743 "params": { 00:24:36.743 "name": "Nvme$subsystem", 00:24:36.743 "trtype": "$TEST_TRANSPORT", 00:24:36.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.743 "adrfam": "ipv4", 00:24:36.743 "trsvcid": "$NVMF_PORT", 00:24:36.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.743 "hdgst": ${hdgst:-false}, 00:24:36.743 "ddgst": ${ddgst:-false} 00:24:36.743 }, 00:24:36.743 "method": "bdev_nvme_attach_controller" 00:24:36.743 } 00:24:36.743 EOF 00:24:36.743 )") 00:24:36.743 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:36.743 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:36.743 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:36.743 { 00:24:36.743 "params": { 00:24:36.743 "name": "Nvme$subsystem", 00:24:36.743 "trtype": "$TEST_TRANSPORT", 00:24:36.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.743 "adrfam": "ipv4", 00:24:36.743 "trsvcid": "$NVMF_PORT", 00:24:36.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.743 "hdgst": ${hdgst:-false}, 00:24:36.743 "ddgst": ${ddgst:-false} 00:24:36.743 }, 00:24:36.743 "method": "bdev_nvme_attach_controller" 00:24:36.743 } 00:24:36.743 EOF 00:24:36.743 )") 00:24:36.743 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:36.743 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:36.743 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:36.743 { 00:24:36.743 "params": { 00:24:36.743 "name": "Nvme$subsystem", 00:24:36.743 "trtype": "$TEST_TRANSPORT", 00:24:36.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.743 "adrfam": "ipv4", 00:24:36.743 "trsvcid": "$NVMF_PORT", 00:24:36.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.743 "hdgst": ${hdgst:-false}, 00:24:36.743 "ddgst": ${ddgst:-false} 00:24:36.743 }, 00:24:36.743 "method": "bdev_nvme_attach_controller" 00:24:36.743 } 00:24:36.743 EOF 00:24:36.743 )") 00:24:36.743 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:36.743 [2024-10-07 09:45:36.018594] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
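What tc1 actually asserts is recorded in the shutdown.sh@84-@89 steps above: the throwaway bdev_svc instance (pid 3429914) existed only to prove that the generated JSON loads, it is killed outright with kill -9, and kill -0 3429686 then confirms the nvmf target itself survived before bdevperf reconnects with the same configuration. kill -0 delivers no signal at all; it only reports whether the pid can still be signalled, the standard shell liveness probe (target_pid below stands in for the literal 3429686):

if kill -0 "$target_pid" 2>/dev/null; then
  echo "target $target_pid survived the client kill"
else
  echo "target died during the shutdown test" >&2
  exit 1
fi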
00:24:36.743 [2024-10-07 09:45:36.018656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430591 ] 00:24:36.743 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:36.743 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:36.743 { 00:24:36.743 "params": { 00:24:36.743 "name": "Nvme$subsystem", 00:24:36.743 "trtype": "$TEST_TRANSPORT", 00:24:36.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.743 "adrfam": "ipv4", 00:24:36.743 "trsvcid": "$NVMF_PORT", 00:24:36.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.743 "hdgst": ${hdgst:-false}, 00:24:36.743 "ddgst": ${ddgst:-false} 00:24:36.743 }, 00:24:36.743 "method": "bdev_nvme_attach_controller" 00:24:36.743 } 00:24:36.743 EOF 00:24:36.743 )") 00:24:36.743 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:36.743 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:36.743 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:36.743 { 00:24:36.743 "params": { 00:24:36.743 "name": "Nvme$subsystem", 00:24:36.743 "trtype": "$TEST_TRANSPORT", 00:24:36.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.743 "adrfam": "ipv4", 00:24:36.743 "trsvcid": "$NVMF_PORT", 00:24:36.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.743 "hdgst": ${hdgst:-false}, 00:24:36.743 "ddgst": ${ddgst:-false} 00:24:36.743 }, 00:24:36.743 "method": "bdev_nvme_attach_controller" 00:24:36.743 } 00:24:36.743 EOF 00:24:36.743 )") 00:24:36.743 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:36.743 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:36.743 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:36.743 { 00:24:36.743 "params": { 00:24:36.743 "name": "Nvme$subsystem", 00:24:36.743 "trtype": "$TEST_TRANSPORT", 00:24:36.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.743 "adrfam": "ipv4", 00:24:36.743 "trsvcid": "$NVMF_PORT", 00:24:36.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.743 "hdgst": ${hdgst:-false}, 00:24:36.743 "ddgst": ${ddgst:-false} 00:24:36.743 }, 00:24:36.744 "method": "bdev_nvme_attach_controller" 00:24:36.744 } 00:24:36.744 EOF 00:24:36.744 )") 00:24:36.744 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:36.744 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:36.744 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:36.744 { 00:24:36.744 "params": { 00:24:36.744 "name": "Nvme$subsystem", 00:24:36.744 "trtype": "$TEST_TRANSPORT", 00:24:36.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.744 
"adrfam": "ipv4", 00:24:36.744 "trsvcid": "$NVMF_PORT", 00:24:36.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.744 "hdgst": ${hdgst:-false}, 00:24:36.744 "ddgst": ${ddgst:-false} 00:24:36.744 }, 00:24:36.744 "method": "bdev_nvme_attach_controller" 00:24:36.744 } 00:24:36.744 EOF 00:24:36.744 )") 00:24:36.744 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:36.744 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:24:36.744 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:24:36.744 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:24:36.744 "params": { 00:24:36.744 "name": "Nvme1", 00:24:36.744 "trtype": "tcp", 00:24:36.744 "traddr": "10.0.0.2", 00:24:36.744 "adrfam": "ipv4", 00:24:36.744 "trsvcid": "4420", 00:24:36.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:36.744 "hdgst": false, 00:24:36.744 "ddgst": false 00:24:36.744 }, 00:24:36.744 "method": "bdev_nvme_attach_controller" 00:24:36.744 },{ 00:24:36.744 "params": { 00:24:36.744 "name": "Nvme2", 00:24:36.744 "trtype": "tcp", 00:24:36.744 "traddr": "10.0.0.2", 00:24:36.744 "adrfam": "ipv4", 00:24:36.744 "trsvcid": "4420", 00:24:36.744 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:36.744 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:36.744 "hdgst": false, 00:24:36.744 "ddgst": false 00:24:36.744 }, 00:24:36.744 "method": "bdev_nvme_attach_controller" 00:24:36.744 },{ 00:24:36.744 "params": { 00:24:36.744 "name": "Nvme3", 00:24:36.744 "trtype": "tcp", 00:24:36.744 "traddr": "10.0.0.2", 00:24:36.744 "adrfam": "ipv4", 00:24:36.744 "trsvcid": "4420", 00:24:36.744 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:36.744 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:36.744 "hdgst": false, 00:24:36.744 "ddgst": false 00:24:36.744 }, 00:24:36.744 "method": "bdev_nvme_attach_controller" 00:24:36.744 },{ 00:24:36.744 "params": { 00:24:36.744 "name": "Nvme4", 00:24:36.744 "trtype": "tcp", 00:24:36.744 "traddr": "10.0.0.2", 00:24:36.744 "adrfam": "ipv4", 00:24:36.744 "trsvcid": "4420", 00:24:36.744 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:36.744 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:36.744 "hdgst": false, 00:24:36.744 "ddgst": false 00:24:36.744 }, 00:24:36.744 "method": "bdev_nvme_attach_controller" 00:24:36.744 },{ 00:24:36.744 "params": { 00:24:36.744 "name": "Nvme5", 00:24:36.744 "trtype": "tcp", 00:24:36.744 "traddr": "10.0.0.2", 00:24:36.744 "adrfam": "ipv4", 00:24:36.744 "trsvcid": "4420", 00:24:36.744 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:36.744 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:36.744 "hdgst": false, 00:24:36.744 "ddgst": false 00:24:36.744 }, 00:24:36.744 "method": "bdev_nvme_attach_controller" 00:24:36.744 },{ 00:24:36.744 "params": { 00:24:36.744 "name": "Nvme6", 00:24:36.744 "trtype": "tcp", 00:24:36.744 "traddr": "10.0.0.2", 00:24:36.744 "adrfam": "ipv4", 00:24:36.744 "trsvcid": "4420", 00:24:36.744 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:36.744 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:36.744 "hdgst": false, 00:24:36.744 "ddgst": false 00:24:36.744 }, 00:24:36.744 "method": "bdev_nvme_attach_controller" 00:24:36.744 },{ 00:24:36.744 "params": { 00:24:36.744 "name": "Nvme7", 00:24:36.744 "trtype": "tcp", 00:24:36.744 "traddr": "10.0.0.2", 
00:24:36.744 "adrfam": "ipv4", 00:24:36.744 "trsvcid": "4420", 00:24:36.744 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:36.744 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:36.744 "hdgst": false, 00:24:36.744 "ddgst": false 00:24:36.744 }, 00:24:36.744 "method": "bdev_nvme_attach_controller" 00:24:36.744 },{ 00:24:36.744 "params": { 00:24:36.744 "name": "Nvme8", 00:24:36.744 "trtype": "tcp", 00:24:36.744 "traddr": "10.0.0.2", 00:24:36.744 "adrfam": "ipv4", 00:24:36.744 "trsvcid": "4420", 00:24:36.744 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:36.744 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:36.744 "hdgst": false, 00:24:36.744 "ddgst": false 00:24:36.744 }, 00:24:36.744 "method": "bdev_nvme_attach_controller" 00:24:36.744 },{ 00:24:36.744 "params": { 00:24:36.744 "name": "Nvme9", 00:24:36.744 "trtype": "tcp", 00:24:36.744 "traddr": "10.0.0.2", 00:24:36.744 "adrfam": "ipv4", 00:24:36.744 "trsvcid": "4420", 00:24:36.744 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:36.744 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:36.744 "hdgst": false, 00:24:36.744 "ddgst": false 00:24:36.744 }, 00:24:36.744 "method": "bdev_nvme_attach_controller" 00:24:36.744 },{ 00:24:36.744 "params": { 00:24:36.744 "name": "Nvme10", 00:24:36.744 "trtype": "tcp", 00:24:36.744 "traddr": "10.0.0.2", 00:24:36.744 "adrfam": "ipv4", 00:24:36.744 "trsvcid": "4420", 00:24:36.744 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:36.744 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:36.744 "hdgst": false, 00:24:36.744 "ddgst": false 00:24:36.744 }, 00:24:36.744 "method": "bdev_nvme_attach_controller" 00:24:36.744 }' 00:24:36.744 [2024-10-07 09:45:36.101128] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.744 [2024-10-07 09:45:36.165144] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.128 Running I/O for 1 seconds... 
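The verify pass now starting uses the parameters from the shutdown.sh@92 invocation above: queue depth 64 (-q 64), 64 KiB I/Os (-o 65536), a verify workload that reads back and checks what it wrote (-w verify), and one second of runtime (-t 1), fanned out across all ten attached controllers; that is the usual reading of bdevperf's flags, which this log does not itself spell out. As a cross-check on the headline figure that follows: 1863 IOPS x 65536 bytes = 122093568 B/s, i.e. the reported 116.44 MiB/s.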
00:24:39.069 1863.00 IOPS, 116.44 MiB/s
00:24:39.069 Latency(us)
00:24:39.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:39.069 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:39.069 Verification LBA range: start 0x0 length 0x400
00:24:39.069 Nvme1n1 : 1.12 235.35 14.71 0.00 0.00 268688.07 2525.87 244667.73
00:24:39.069 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:39.069 Verification LBA range: start 0x0 length 0x400
00:24:39.069 Nvme2n1 : 1.13 227.39 14.21 0.00 0.00 273900.80 20206.93 248162.99
00:24:39.069 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:39.069 Verification LBA range: start 0x0 length 0x400
00:24:39.069 Nvme3n1 : 1.10 231.94 14.50 0.00 0.00 258368.64 19005.44 244667.73
00:24:39.069 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:39.069 Verification LBA range: start 0x0 length 0x400
00:24:39.069 Nvme4n1 : 1.07 238.21 14.89 0.00 0.00 251374.29 17039.36 255153.49
00:24:39.069 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:39.069 Verification LBA range: start 0x0 length 0x400
00:24:39.069 Nvme5n1 : 1.08 236.01 14.75 0.00 0.00 249437.01 30583.47 251658.24
00:24:39.069 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:39.069 Verification LBA range: start 0x0 length 0x400
00:24:39.069 Nvme6n1 : 1.11 231.23 14.45 0.00 0.00 250318.08 35826.35 222822.40
00:24:39.069 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:39.069 Verification LBA range: start 0x0 length 0x400
00:24:39.069 Nvme7n1 : 1.15 277.90 17.37 0.00 0.00 204714.24 12288.00 253405.87
00:24:39.069 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:39.069 Verification LBA range: start 0x0 length 0x400
00:24:39.069 Nvme8n1 : 1.18 270.24 16.89 0.00 0.00 207905.79 14090.24 244667.73
00:24:39.069 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:39.069 Verification LBA range: start 0x0 length 0x400
00:24:39.069 Nvme9n1 : 1.19 268.08 16.75 0.00 0.00 206108.76 13653.33 246415.36
00:24:39.069 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:39.069 Verification LBA range: start 0x0 length 0x400
00:24:39.069 Nvme10n1 : 1.19 268.90 16.81 0.00 0.00 201328.47 6198.61 262144.00
00:24:39.069 ===================================================================================================================
00:24:39.069 Total : 2485.24 155.33 0.00 0.00 234372.44 2525.87 262144.00
00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup
09:45:38
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:39.330 rmmod nvme_tcp 00:24:39.330 rmmod nvme_fabrics 00:24:39.330 rmmod nvme_keyring 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 3429686 ']' 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 3429686 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' -z 3429686 ']' 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # kill -0 3429686 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # uname 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3429686 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3429686' 00:24:39.330 killing process with pid 3429686 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # kill 3429686 00:24:39.330 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@977 -- # wait 3429686 00:24:39.591 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:39.591 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:39.591 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:39.591 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:24:39.591 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:24:39.591 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:39.591 09:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:24:39.591 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:39.591 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:39.591 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.591 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.591 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:42.137 00:24:42.137 real 0m16.872s 00:24:42.137 user 0m33.794s 00:24:42.137 sys 0m6.878s 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # xtrace_disable 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:42.137 ************************************ 00:24:42.137 END TEST nvmf_shutdown_tc1 00:24:42.137 ************************************ 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1110 -- # xtrace_disable 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:42.137 ************************************ 00:24:42.137 START TEST nvmf_shutdown_tc2 00:24:42.137 ************************************ 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # nvmf_shutdown_tc2 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
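The firewall step of the tc1 teardown just traced (iptables-save piped through grep -v SPDK_NVMF into iptables-restore) works because setup and teardown share a marker: every rule the test inserts is tagged with an SPDK_NVMF comment. The ipts/iptr helpers visible at nvmf/common.sh@788-789 in this log reduce to the following sketch (simplified, not the verbatim library code):

ipts() {
    # Insert an iptables rule, tagged with a comment that names the rule itself.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
iptr() {
    # Rewrite the ruleset minus every SPDK_NVMF-tagged rule; untagged rules survive.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # setup: open the NVMe/TCP port
iptr                                                       # teardown: drop only our rules

This keeps the host's own firewall configuration intact no matter how many rules a test run adds.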
00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:42.137 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:42.137 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:42.137 09:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:42.137 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:42.138 Found net devices under 0000:31:00.0: cvl_0_0 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:42.138 Found net devices under 0000:31:00.1: cvl_0_1 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:42.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:42.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms
00:24:42.138
00:24:42.138 --- 10.0.0.2 ping statistics ---
00:24:42.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:42.138 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms
00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:42.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:42.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms
00:24:42.138
00:24:42.138 --- 10.0.0.1 ping statistics ---
00:24:42.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:42.138 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms
00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0
09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']'
09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp
09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable
09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3431704
09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3431704
09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # '[' -z 3431704 ']'
09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock
09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local max_retries=100
09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@841 -- # echo 'Waiting for
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@843 -- # xtrace_disable 00:24:42.138 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:42.138 [2024-10-07 09:45:41.759501] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:24:42.138 [2024-10-07 09:45:41.759568] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.399 [2024-10-07 09:45:41.848305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:42.399 [2024-10-07 09:45:41.908408] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.399 [2024-10-07 09:45:41.908441] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.399 [2024-10-07 09:45:41.908447] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.399 [2024-10-07 09:45:41.908452] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.399 [2024-10-07 09:45:41.908456] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:42.399 [2024-10-07 09:45:41.909905] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.399 [2024-10-07 09:45:41.910056] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:24:42.399 [2024-10-07 09:45:41.910171] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.399 [2024-10-07 09:45:41.910172] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:24:42.969 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:24:42.969 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@867 -- # return 0 00:24:42.969 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:42.969 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@733 -- # xtrace_disable 00:24:42.969 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:42.969 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.969 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:42.970 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:42.970 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:42.970 [2024-10-07 09:45:42.606048] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.970 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:42.970 09:45:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:42.970 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:42.970 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:42.970 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:42.970 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:42.970 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:42.970 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:42.970 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:42.970 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:43.230 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:43.230 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:43.230 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:43.230 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:43.230 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:43.230 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:43.230 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:43.230 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:43.230 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:43.230 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:43.230 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:43.230 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:43.230 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:43.230 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:43.230 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:43.230 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:43.230 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:43.230 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:43.230 
09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:43.230 Malloc1 00:24:43.230 [2024-10-07 09:45:42.704791] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.230 Malloc2 00:24:43.230 Malloc3 00:24:43.230 Malloc4 00:24:43.230 Malloc5 00:24:43.230 Malloc6 00:24:43.491 Malloc7 00:24:43.491 Malloc8 00:24:43.491 Malloc9 00:24:43.491 Malloc10 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@733 -- # xtrace_disable 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3432088 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3432088 /var/tmp/bdevperf.sock 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # '[' -z 3432088 ']' 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local max_retries=100 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
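The wait above is the second half of a launch-then-poll handshake. As the next trace lines show, target/shutdown.sh@103-105 starts bdevperf in the background with the generated JSON fed in through process substitution (which is why the trace records --json /dev/fd/63), then blocks in waitforlisten until the RPC socket exists. Reduced to a sketch, with paths shortened from the workspace layout in the trace:

# Launch bdevperf against the generated NVMe-oF config and wait for its RPC socket.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!                                   # 3432088 in this run
waitforlisten "$perfpid" /var/tmp/bdevperf.sock

Once the socket is up, the test polls bdev_get_iostat over it (jq extracts .bdevs[0].num_read_ops) every 0.25 s until at least 100 reads have completed, so the target is guaranteed to be killed mid-I/O rather than before traffic starts.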
00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@843 -- # xtrace_disable 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:43.491 { 00:24:43.491 "params": { 00:24:43.491 "name": "Nvme$subsystem", 00:24:43.491 "trtype": "$TEST_TRANSPORT", 00:24:43.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.491 "adrfam": "ipv4", 00:24:43.491 "trsvcid": "$NVMF_PORT", 00:24:43.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.491 "hdgst": ${hdgst:-false}, 00:24:43.491 "ddgst": ${ddgst:-false} 00:24:43.491 }, 00:24:43.491 "method": "bdev_nvme_attach_controller" 00:24:43.491 } 00:24:43.491 EOF 00:24:43.491 )") 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:43.491 { 00:24:43.491 "params": { 00:24:43.491 "name": "Nvme$subsystem", 00:24:43.491 "trtype": "$TEST_TRANSPORT", 00:24:43.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.491 "adrfam": "ipv4", 00:24:43.491 "trsvcid": "$NVMF_PORT", 00:24:43.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.491 "hdgst": ${hdgst:-false}, 00:24:43.491 "ddgst": ${ddgst:-false} 00:24:43.491 }, 00:24:43.491 "method": "bdev_nvme_attach_controller" 00:24:43.491 } 00:24:43.491 EOF 00:24:43.491 )") 00:24:43.491 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:43.492 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:43.492 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:43.492 { 00:24:43.492 "params": { 00:24:43.492 "name": "Nvme$subsystem", 00:24:43.492 "trtype": "$TEST_TRANSPORT", 00:24:43.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.492 "adrfam": "ipv4", 00:24:43.492 "trsvcid": "$NVMF_PORT", 00:24:43.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.492 "hdgst": ${hdgst:-false}, 00:24:43.492 "ddgst": ${ddgst:-false} 00:24:43.492 }, 00:24:43.492 "method": 
"bdev_nvme_attach_controller" 00:24:43.492 } 00:24:43.492 EOF 00:24:43.492 )") 00:24:43.492 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:43.492 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:43.492 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:43.492 { 00:24:43.492 "params": { 00:24:43.492 "name": "Nvme$subsystem", 00:24:43.492 "trtype": "$TEST_TRANSPORT", 00:24:43.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.492 "adrfam": "ipv4", 00:24:43.492 "trsvcid": "$NVMF_PORT", 00:24:43.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.492 "hdgst": ${hdgst:-false}, 00:24:43.492 "ddgst": ${ddgst:-false} 00:24:43.492 }, 00:24:43.492 "method": "bdev_nvme_attach_controller" 00:24:43.492 } 00:24:43.492 EOF 00:24:43.492 )") 00:24:43.492 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:43.492 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:43.492 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:43.492 { 00:24:43.492 "params": { 00:24:43.492 "name": "Nvme$subsystem", 00:24:43.492 "trtype": "$TEST_TRANSPORT", 00:24:43.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.492 "adrfam": "ipv4", 00:24:43.492 "trsvcid": "$NVMF_PORT", 00:24:43.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.492 "hdgst": ${hdgst:-false}, 00:24:43.492 "ddgst": ${ddgst:-false} 00:24:43.492 }, 00:24:43.492 "method": "bdev_nvme_attach_controller" 00:24:43.492 } 00:24:43.492 EOF 00:24:43.492 )") 00:24:43.492 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:43.492 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:43.492 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:43.492 { 00:24:43.492 "params": { 00:24:43.492 "name": "Nvme$subsystem", 00:24:43.492 "trtype": "$TEST_TRANSPORT", 00:24:43.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.492 "adrfam": "ipv4", 00:24:43.492 "trsvcid": "$NVMF_PORT", 00:24:43.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.492 "hdgst": ${hdgst:-false}, 00:24:43.492 "ddgst": ${ddgst:-false} 00:24:43.492 }, 00:24:43.492 "method": "bdev_nvme_attach_controller" 00:24:43.492 } 00:24:43.492 EOF 00:24:43.492 )") 00:24:43.492 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:43.492 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:43.492 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:43.492 { 00:24:43.492 "params": { 00:24:43.492 "name": "Nvme$subsystem", 00:24:43.492 "trtype": "$TEST_TRANSPORT", 00:24:43.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.492 "adrfam": "ipv4", 00:24:43.492 "trsvcid": "$NVMF_PORT", 00:24:43.492 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.492 "hdgst": ${hdgst:-false}, 00:24:43.492 "ddgst": ${ddgst:-false} 00:24:43.492 }, 00:24:43.492 "method": "bdev_nvme_attach_controller" 00:24:43.492 } 00:24:43.492 EOF 00:24:43.492 )") 00:24:43.752 [2024-10-07 09:45:43.152763] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:24:43.752 [2024-10-07 09:45:43.152818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3432088 ] 00:24:43.752 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:43.752 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:43.752 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:43.752 { 00:24:43.752 "params": { 00:24:43.752 "name": "Nvme$subsystem", 00:24:43.752 "trtype": "$TEST_TRANSPORT", 00:24:43.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.752 "adrfam": "ipv4", 00:24:43.752 "trsvcid": "$NVMF_PORT", 00:24:43.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.752 "hdgst": ${hdgst:-false}, 00:24:43.752 "ddgst": ${ddgst:-false} 00:24:43.752 }, 00:24:43.752 "method": "bdev_nvme_attach_controller" 00:24:43.752 } 00:24:43.752 EOF 00:24:43.752 )") 00:24:43.752 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:43.752 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:43.752 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:43.752 { 00:24:43.752 "params": { 00:24:43.752 "name": "Nvme$subsystem", 00:24:43.752 "trtype": "$TEST_TRANSPORT", 00:24:43.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.752 "adrfam": "ipv4", 00:24:43.752 "trsvcid": "$NVMF_PORT", 00:24:43.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.752 "hdgst": ${hdgst:-false}, 00:24:43.752 "ddgst": ${ddgst:-false} 00:24:43.752 }, 00:24:43.752 "method": "bdev_nvme_attach_controller" 00:24:43.752 } 00:24:43.752 EOF 00:24:43.752 )") 00:24:43.752 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:43.752 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:43.752 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:43.752 { 00:24:43.752 "params": { 00:24:43.753 "name": "Nvme$subsystem", 00:24:43.753 "trtype": "$TEST_TRANSPORT", 00:24:43.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.753 "adrfam": "ipv4", 00:24:43.753 "trsvcid": "$NVMF_PORT", 00:24:43.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.753 "hdgst": ${hdgst:-false}, 00:24:43.753 "ddgst": ${ddgst:-false} 00:24:43.753 }, 00:24:43.753 "method": "bdev_nvme_attach_controller" 00:24:43.753 } 00:24:43.753 EOF 00:24:43.753 )") 00:24:43.753 09:45:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:43.753 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:24:43.753 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:24:43.753 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:24:43.753 "params": { 00:24:43.753 "name": "Nvme1", 00:24:43.753 "trtype": "tcp", 00:24:43.753 "traddr": "10.0.0.2", 00:24:43.753 "adrfam": "ipv4", 00:24:43.753 "trsvcid": "4420", 00:24:43.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:43.753 "hdgst": false, 00:24:43.753 "ddgst": false 00:24:43.753 }, 00:24:43.753 "method": "bdev_nvme_attach_controller" 00:24:43.753 },{ 00:24:43.753 "params": { 00:24:43.753 "name": "Nvme2", 00:24:43.753 "trtype": "tcp", 00:24:43.753 "traddr": "10.0.0.2", 00:24:43.753 "adrfam": "ipv4", 00:24:43.753 "trsvcid": "4420", 00:24:43.753 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:43.753 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:43.753 "hdgst": false, 00:24:43.753 "ddgst": false 00:24:43.753 }, 00:24:43.753 "method": "bdev_nvme_attach_controller" 00:24:43.753 },{ 00:24:43.753 "params": { 00:24:43.753 "name": "Nvme3", 00:24:43.753 "trtype": "tcp", 00:24:43.753 "traddr": "10.0.0.2", 00:24:43.753 "adrfam": "ipv4", 00:24:43.753 "trsvcid": "4420", 00:24:43.753 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:43.753 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:43.753 "hdgst": false, 00:24:43.753 "ddgst": false 00:24:43.753 }, 00:24:43.753 "method": "bdev_nvme_attach_controller" 00:24:43.753 },{ 00:24:43.753 "params": { 00:24:43.753 "name": "Nvme4", 00:24:43.753 "trtype": "tcp", 00:24:43.753 "traddr": "10.0.0.2", 00:24:43.753 "adrfam": "ipv4", 00:24:43.753 "trsvcid": "4420", 00:24:43.753 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:43.753 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:43.753 "hdgst": false, 00:24:43.753 "ddgst": false 00:24:43.753 }, 00:24:43.753 "method": "bdev_nvme_attach_controller" 00:24:43.753 },{ 00:24:43.753 "params": { 00:24:43.753 "name": "Nvme5", 00:24:43.753 "trtype": "tcp", 00:24:43.753 "traddr": "10.0.0.2", 00:24:43.753 "adrfam": "ipv4", 00:24:43.753 "trsvcid": "4420", 00:24:43.753 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:43.753 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:43.753 "hdgst": false, 00:24:43.753 "ddgst": false 00:24:43.753 }, 00:24:43.753 "method": "bdev_nvme_attach_controller" 00:24:43.753 },{ 00:24:43.753 "params": { 00:24:43.753 "name": "Nvme6", 00:24:43.753 "trtype": "tcp", 00:24:43.753 "traddr": "10.0.0.2", 00:24:43.753 "adrfam": "ipv4", 00:24:43.753 "trsvcid": "4420", 00:24:43.753 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:43.753 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:43.753 "hdgst": false, 00:24:43.753 "ddgst": false 00:24:43.753 }, 00:24:43.753 "method": "bdev_nvme_attach_controller" 00:24:43.753 },{ 00:24:43.753 "params": { 00:24:43.753 "name": "Nvme7", 00:24:43.753 "trtype": "tcp", 00:24:43.753 "traddr": "10.0.0.2", 00:24:43.753 "adrfam": "ipv4", 00:24:43.753 "trsvcid": "4420", 00:24:43.753 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:43.753 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:43.753 "hdgst": false, 00:24:43.753 "ddgst": false 00:24:43.753 }, 00:24:43.753 "method": "bdev_nvme_attach_controller" 00:24:43.753 },{ 00:24:43.753 "params": { 00:24:43.753 "name": "Nvme8", 00:24:43.753 "trtype": "tcp", 
00:24:43.753 "traddr": "10.0.0.2", 00:24:43.753 "adrfam": "ipv4", 00:24:43.753 "trsvcid": "4420", 00:24:43.753 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:43.753 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:43.753 "hdgst": false, 00:24:43.753 "ddgst": false 00:24:43.753 }, 00:24:43.753 "method": "bdev_nvme_attach_controller" 00:24:43.753 },{ 00:24:43.753 "params": { 00:24:43.753 "name": "Nvme9", 00:24:43.753 "trtype": "tcp", 00:24:43.753 "traddr": "10.0.0.2", 00:24:43.753 "adrfam": "ipv4", 00:24:43.753 "trsvcid": "4420", 00:24:43.753 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:43.753 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:43.753 "hdgst": false, 00:24:43.753 "ddgst": false 00:24:43.753 }, 00:24:43.753 "method": "bdev_nvme_attach_controller" 00:24:43.753 },{ 00:24:43.753 "params": { 00:24:43.753 "name": "Nvme10", 00:24:43.753 "trtype": "tcp", 00:24:43.753 "traddr": "10.0.0.2", 00:24:43.753 "adrfam": "ipv4", 00:24:43.753 "trsvcid": "4420", 00:24:43.753 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:43.753 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:43.753 "hdgst": false, 00:24:43.753 "ddgst": false 00:24:43.753 }, 00:24:43.753 "method": "bdev_nvme_attach_controller" 00:24:43.753 }' 00:24:43.753 [2024-10-07 09:45:43.232874] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.753 [2024-10-07 09:45:43.298519] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.663 Running I/O for 10 seconds... 00:24:45.663 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:24:45.663 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@867 -- # return 0 00:24:45.663 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:45.663 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:45.663 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:45.663 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:45.663 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:45.663 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:45.663 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:45.663 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:24:45.663 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:24:45.663 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:45.663 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:45.663 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:45.663 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:45.663 09:45:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:45.663 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:45.663 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:45.663 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:24:45.663 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:24:45.663 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:45.923 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:45.923 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:45.923 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:45.923 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:45.923 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:45.923 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:45.923 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:45.923 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=72 00:24:45.923 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 72 -ge 100 ']' 00:24:45.923 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=136 00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:24:46.185 09:45:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0
00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3432088
00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' -z 3432088 ']'
00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # kill -0 3432088
00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # uname
00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3432088
00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # process_name=reactor_0
00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']'
00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3432088'
00:24:46.185 killing process with pid 3432088
00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # kill 3432088
00:24:46.185 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@977 -- # wait 3432088
00:24:46.446 2309.00 IOPS, 144.31 MiB/s
00:24:46.446 Received shutdown signal, test time was about 1.020323 seconds
00:24:46.446
00:24:46.446 Latency(us)
00:24:46.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:46.446 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:46.446 Verification LBA range: start 0x0 length 0x400
00:24:46.446 Nvme1n1 : 1.02 256.02 16.00 0.00 0.00 237899.03 19114.67 249910.61
00:24:46.446 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:46.446 Verification LBA range: start 0x0 length 0x400
00:24:46.446 Nvme2n1 : 0.95 202.44 12.65 0.00 0.00 305843.20 32549.55 235929.60
00:24:46.446 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:46.446 Verification LBA range: start 0x0 length 0x400
00:24:46.446 Nvme3n1 : 0.97 262.65 16.42 0.00 0.00 231034.88 15837.87 228939.09
00:24:46.446 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:46.446 Verification LBA range: start 0x0 length 0x400
00:24:46.446 Nvme4n1 : 0.97 263.34 16.46 0.00 0.00 225617.92 19988.48 256901.12
00:24:46.446 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:46.446 Verification LBA range: start 0x0 length 0x400
00:24:46.446 Nvme5n1 : 0.96 199.97 12.50 0.00 0.00 290537.24 37573.97 256901.12
00:24:46.446 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:46.446 Verification LBA range: start 0x0 length 0x400
00:24:46.446 Nvme6n1 : 0.96 200.56 12.53 0.00 0.00 283147.95 19551.57 279620.27
00:24:46.446 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:46.446 Verification LBA range: start 0x0 length 0x400
00:24:46.446 Nvme7n1 : 0.95 203.08 12.69 0.00 0.00 272659.06 13271.04 251658.24
00:24:46.446 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:46.446 Verification LBA range: start 0x0 length 0x400
00:24:46.446 Nvme8n1 : 0.97 265.27 16.58 0.00 0.00 204167.79 8792.75 258648.75
00:24:46.446 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:46.446 Verification LBA range: start 0x0 length 0x400
00:24:46.446 Nvme9n1 : 0.98 260.58 16.29 0.00 0.00 204029.87 18240.85 256901.12
00:24:46.446 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:46.446 Verification LBA range: start 0x0 length 0x400
00:24:46.446 Nvme10n1 : 0.97 269.86 16.87 0.00 0.00 190911.79 8410.45 230686.72
00:24:46.446 ===================================================================================================================
00:24:46.446 Total : 2383.75 148.98 0.00 0.00 239646.21 8410.45 279620.27
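The table above is bdevperf's end-of-run summary for the tc2 run: one row per attached NVMe-oF bdev, giving runtime in seconds, IOPS, throughput in MiB/s, failure and timeout rates, and average/min/max latency in microseconds. The headline line and the table are internally consistent with the 64 KiB I/O size shown in each Job row (IO size: 65536): at 64 KiB per I/O, MiB/s is simply IOPS/16, so 2309.00 IOPS gives 2309.00 / 16 = 144.31 MiB/s (equivalently 2309 x 65536 = 151,322,624 bytes/s), and the Total row's 2383.75 IOPS gives 2383.75 / 16 = 148.98 MiB/s, exactly as reported. The two headline numbers are the same measurement expressed in different units.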
00:24:46.446 09:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:24:47.457 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3431704
00:24:47.457 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:24:47.458 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:24:47.458 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:47.458 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:47.458 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini
00:24:47.458 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup
00:24:47.458 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
00:24:47.458 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:47.458 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e
00:24:47.458 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:47.458 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:47.458 rmmod nvme_tcp
00:24:47.746 rmmod nvme_fabrics
00:24:47.746 rmmod nvme_keyring
00:24:47.746 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:47.746 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e
00:24:47.746 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0
00:24:47.746 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 3431704 ']'
00:24:47.746 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 3431704
00:24:47.746 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' -z 3431704 ']'
00:24:47.746 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # kill -0 3431704
00:24:47.746 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # uname
00:24:47.746 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:24:47.746 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3431704
00:24:47.746 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # process_name=reactor_1
00:24:47.746 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']'
00:24:47.746 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3431704'
00:24:47.746 killing process with pid 3431704
00:24:47.746 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # kill 3431704
00:24:47.746 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@977 -- # wait 3431704
00:24:48.013 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:24:48.013 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:24:48.013 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:24:48.013 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr
00:24:48.013 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save
00:24:48.013 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:24:48.013 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore
00:24:48.013 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:48.013 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:48.013 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:48.013 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:48.013 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:49.928 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:49.928
00:24:49.928 real 0m8.250s
00:24:49.928 user 0m25.333s
00:24:49.928 sys 0m1.322s
00:24:49.928 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # xtrace_disable
00:24:49.928 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:49.928 ************************************
00:24:49.928 END TEST nvmf_shutdown_tc2
00:24:49.928 ************************************
00:24:50.189 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test
nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:50.189 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:24:50.189 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1110 -- # xtrace_disable 00:24:50.189 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:50.189 ************************************ 00:24:50.189 START TEST nvmf_shutdown_tc3 00:24:50.189 ************************************ 00:24:50.189 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # nvmf_shutdown_tc3 00:24:50.189 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:24:50.190 09:45:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:50.190 
09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:50.190 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:50.190 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:50.190 Found net devices under 0000:31:00.0: cvl_0_0 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.190 09:45:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:50.190 Found net devices under 0000:31:00.1: cvl_0_1 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:50.190 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:50.191 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:50.191 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:50.191 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:50.191 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.191 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0
00:24:50.191 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:50.191 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:50.191 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:50.191 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:50.191 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:50.191 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:50.453 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:50.453 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:50.453 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:50.453 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:50.453 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:50.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:50.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms
00:24:50.453
00:24:50.453 --- 10.0.0.2 ping statistics ---
00:24:50.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:50.453 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms
00:24:50.453 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:50.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:50.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms
00:24:50.453
00:24:50.453 --- 10.0.0.1 ping statistics ---
00:24:50.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:50.453 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms
00:24:50.453 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:50.453 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0
00:24:50.453 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:24:50.453 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:50.453 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:24:50.453 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:24:50.453 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:50.453 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:24:50.453 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:24:50.453 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:24:50.453 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:24:50.453 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@727 -- # xtrace_disable
00:24:50.453 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:24:50.453 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=3433553
00:24:50.453 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 3433553
00:24:50.453 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:24:50.453 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # '[' -z 3433553 ']'
00:24:50.453 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:50.453 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local max_retries=100
00:24:50.453 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:50.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
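The ip/iptables sequence above is the harness's phy-NIC topology: one port of the E810 pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and given the target address 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator side, an iptables rule admits NVMe/TCP on port 4420, and a ping in each direction proves the path before nvmf_tgt is launched inside the namespace. A minimal stand-alone sketch of the same pattern, with hypothetical interface names eth_a/eth_b standing in for a connected port pair:

  # target side lives in its own namespace; initiator stays in the root namespace
  ip netns add nvmf_tgt_ns
  ip link set eth_a netns nvmf_tgt_ns
  ip addr add 10.0.0.1/24 dev eth_b
  ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev eth_a
  ip link set eth_b up
  ip netns exec nvmf_tgt_ns ip link set eth_a up
  ip netns exec nvmf_tgt_ns ip link set lo up
  # admit NVMe/TCP traffic on the IO port
  iptables -I INPUT 1 -i eth_b -p tcp --dport 4420 -j ACCEPT
  # verify reachability in both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1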
00:24:50.453 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@843 -- # xtrace_disable 00:24:50.453 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:50.453 [2024-10-07 09:45:50.110085] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:24:50.453 [2024-10-07 09:45:50.110149] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.715 [2024-10-07 09:45:50.201669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:50.715 [2024-10-07 09:45:50.261345] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.715 [2024-10-07 09:45:50.261383] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.715 [2024-10-07 09:45:50.261388] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.715 [2024-10-07 09:45:50.261393] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.715 [2024-10-07 09:45:50.261398] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:50.715 [2024-10-07 09:45:50.262723] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.715 [2024-10-07 09:45:50.262972] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:24:50.715 [2024-10-07 09:45:50.263124] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.715 [2024-10-07 09:45:50.263125] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:24:51.286 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:24:51.286 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@867 -- # return 0 00:24:51.286 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:51.286 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@733 -- # xtrace_disable 00:24:51.286 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:51.547 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:51.548 [2024-10-07 09:45:50.954281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:51.548 09:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.548 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:51.548 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.548 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:51.548 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.548 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:51.548 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:51.548 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:51.548 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:51.548 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:51.548 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:51.548 Malloc1 
00:24:51.548 [2024-10-07 09:45:51.053019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.548 Malloc2 00:24:51.548 Malloc3 00:24:51.548 Malloc4 00:24:51.548 Malloc5 00:24:51.810 Malloc6 00:24:51.810 Malloc7 00:24:51.810 Malloc8 00:24:51.810 Malloc9 00:24:51.810 Malloc10 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@733 -- # xtrace_disable 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3433893 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3433893 /var/tmp/bdevperf.sock 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # '[' -z 3433893 ']' 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local max_retries=100 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:51.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
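The Malloc1 through Malloc10 lines above come from the create-subsystems step traced before them: shutdown.sh@28/@29 cats one block of RPC commands per subsystem into rpcs.txt and shutdown.sh@36 replays the whole file through a single rpc_cmd call, so each of nqn.2016-06.io.spdk:cnode1..cnode10 ends up backed by its own malloc bdev on the 10.0.0.2:4420 listener announced above. A hedged sketch of one such block issued directly with scripts/rpc.py (the malloc size/block-size and serial number here are illustrative; they are not recorded in this log):

  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create -b Malloc1 128 512
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420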
00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@843 -- # xtrace_disable 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:51.810 { 00:24:51.810 "params": { 00:24:51.810 "name": "Nvme$subsystem", 00:24:51.810 "trtype": "$TEST_TRANSPORT", 00:24:51.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.810 "adrfam": "ipv4", 00:24:51.810 "trsvcid": "$NVMF_PORT", 00:24:51.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.810 "hdgst": ${hdgst:-false}, 00:24:51.810 "ddgst": ${ddgst:-false} 00:24:51.810 }, 00:24:51.810 "method": "bdev_nvme_attach_controller" 00:24:51.810 } 00:24:51.810 EOF 00:24:51.810 )") 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:51.810 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:51.810 { 00:24:51.810 "params": { 00:24:51.810 "name": "Nvme$subsystem", 00:24:51.810 "trtype": "$TEST_TRANSPORT", 00:24:51.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.810 "adrfam": "ipv4", 00:24:51.810 "trsvcid": "$NVMF_PORT", 00:24:51.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.810 "hdgst": ${hdgst:-false}, 00:24:51.810 "ddgst": ${ddgst:-false} 00:24:51.810 }, 00:24:51.810 "method": "bdev_nvme_attach_controller" 00:24:51.810 } 00:24:51.810 EOF 00:24:51.810 )") 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:52.072 { 00:24:52.072 "params": { 00:24:52.072 "name": "Nvme$subsystem", 00:24:52.072 "trtype": "$TEST_TRANSPORT", 00:24:52.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.072 "adrfam": "ipv4", 00:24:52.072 "trsvcid": "$NVMF_PORT", 00:24:52.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.072 "hdgst": ${hdgst:-false}, 00:24:52.072 "ddgst": ${ddgst:-false} 00:24:52.072 }, 00:24:52.072 "method": 
"bdev_nvme_attach_controller" 00:24:52.072 } 00:24:52.072 EOF 00:24:52.072 )") 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:52.072 { 00:24:52.072 "params": { 00:24:52.072 "name": "Nvme$subsystem", 00:24:52.072 "trtype": "$TEST_TRANSPORT", 00:24:52.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.072 "adrfam": "ipv4", 00:24:52.072 "trsvcid": "$NVMF_PORT", 00:24:52.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.072 "hdgst": ${hdgst:-false}, 00:24:52.072 "ddgst": ${ddgst:-false} 00:24:52.072 }, 00:24:52.072 "method": "bdev_nvme_attach_controller" 00:24:52.072 } 00:24:52.072 EOF 00:24:52.072 )") 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:52.072 { 00:24:52.072 "params": { 00:24:52.072 "name": "Nvme$subsystem", 00:24:52.072 "trtype": "$TEST_TRANSPORT", 00:24:52.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.072 "adrfam": "ipv4", 00:24:52.072 "trsvcid": "$NVMF_PORT", 00:24:52.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.072 "hdgst": ${hdgst:-false}, 00:24:52.072 "ddgst": ${ddgst:-false} 00:24:52.072 }, 00:24:52.072 "method": "bdev_nvme_attach_controller" 00:24:52.072 } 00:24:52.072 EOF 00:24:52.072 )") 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:52.072 { 00:24:52.072 "params": { 00:24:52.072 "name": "Nvme$subsystem", 00:24:52.072 "trtype": "$TEST_TRANSPORT", 00:24:52.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.072 "adrfam": "ipv4", 00:24:52.072 "trsvcid": "$NVMF_PORT", 00:24:52.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.072 "hdgst": ${hdgst:-false}, 00:24:52.072 "ddgst": ${ddgst:-false} 00:24:52.072 }, 00:24:52.072 "method": "bdev_nvme_attach_controller" 00:24:52.072 } 00:24:52.072 EOF 00:24:52.072 )") 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:52.072 [2024-10-07 09:45:51.509229] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:24:52.072 [2024-10-07 09:45:51.509283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3433893 ] 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:52.072 { 00:24:52.072 "params": { 00:24:52.072 "name": "Nvme$subsystem", 00:24:52.072 "trtype": "$TEST_TRANSPORT", 00:24:52.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.072 "adrfam": "ipv4", 00:24:52.072 "trsvcid": "$NVMF_PORT", 00:24:52.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.072 "hdgst": ${hdgst:-false}, 00:24:52.072 "ddgst": ${ddgst:-false} 00:24:52.072 }, 00:24:52.072 "method": "bdev_nvme_attach_controller" 00:24:52.072 } 00:24:52.072 EOF 00:24:52.072 )") 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:52.072 { 00:24:52.072 "params": { 00:24:52.072 "name": "Nvme$subsystem", 00:24:52.072 "trtype": "$TEST_TRANSPORT", 00:24:52.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.072 "adrfam": "ipv4", 00:24:52.072 "trsvcid": "$NVMF_PORT", 00:24:52.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.072 "hdgst": ${hdgst:-false}, 00:24:52.072 "ddgst": ${ddgst:-false} 00:24:52.072 }, 00:24:52.072 "method": "bdev_nvme_attach_controller" 00:24:52.072 } 00:24:52.072 EOF 00:24:52.072 )") 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:52.072 { 00:24:52.072 "params": { 00:24:52.072 "name": "Nvme$subsystem", 00:24:52.072 "trtype": "$TEST_TRANSPORT", 00:24:52.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.072 "adrfam": "ipv4", 00:24:52.072 "trsvcid": "$NVMF_PORT", 00:24:52.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.072 "hdgst": ${hdgst:-false}, 00:24:52.072 "ddgst": ${ddgst:-false} 00:24:52.072 }, 00:24:52.072 "method": "bdev_nvme_attach_controller" 00:24:52.072 } 00:24:52.072 EOF 00:24:52.072 )") 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:52.072 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:52.072 { 00:24:52.073 "params": { 00:24:52.073 "name": "Nvme$subsystem", 00:24:52.073 "trtype": "$TEST_TRANSPORT", 00:24:52.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.073 
"adrfam": "ipv4", 00:24:52.073 "trsvcid": "$NVMF_PORT", 00:24:52.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.073 "hdgst": ${hdgst:-false}, 00:24:52.073 "ddgst": ${ddgst:-false} 00:24:52.073 }, 00:24:52.073 "method": "bdev_nvme_attach_controller" 00:24:52.073 } 00:24:52.073 EOF 00:24:52.073 )") 00:24:52.073 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:52.073 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:24:52.073 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:24:52.073 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:24:52.073 "params": { 00:24:52.073 "name": "Nvme1", 00:24:52.073 "trtype": "tcp", 00:24:52.073 "traddr": "10.0.0.2", 00:24:52.073 "adrfam": "ipv4", 00:24:52.073 "trsvcid": "4420", 00:24:52.073 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.073 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:52.073 "hdgst": false, 00:24:52.073 "ddgst": false 00:24:52.073 }, 00:24:52.073 "method": "bdev_nvme_attach_controller" 00:24:52.073 },{ 00:24:52.073 "params": { 00:24:52.073 "name": "Nvme2", 00:24:52.073 "trtype": "tcp", 00:24:52.073 "traddr": "10.0.0.2", 00:24:52.073 "adrfam": "ipv4", 00:24:52.073 "trsvcid": "4420", 00:24:52.073 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:52.073 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:52.073 "hdgst": false, 00:24:52.073 "ddgst": false 00:24:52.073 }, 00:24:52.073 "method": "bdev_nvme_attach_controller" 00:24:52.073 },{ 00:24:52.073 "params": { 00:24:52.073 "name": "Nvme3", 00:24:52.073 "trtype": "tcp", 00:24:52.073 "traddr": "10.0.0.2", 00:24:52.073 "adrfam": "ipv4", 00:24:52.073 "trsvcid": "4420", 00:24:52.073 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:52.073 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:52.073 "hdgst": false, 00:24:52.073 "ddgst": false 00:24:52.073 }, 00:24:52.073 "method": "bdev_nvme_attach_controller" 00:24:52.073 },{ 00:24:52.073 "params": { 00:24:52.073 "name": "Nvme4", 00:24:52.073 "trtype": "tcp", 00:24:52.073 "traddr": "10.0.0.2", 00:24:52.073 "adrfam": "ipv4", 00:24:52.073 "trsvcid": "4420", 00:24:52.073 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:52.073 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:52.073 "hdgst": false, 00:24:52.073 "ddgst": false 00:24:52.073 }, 00:24:52.073 "method": "bdev_nvme_attach_controller" 00:24:52.073 },{ 00:24:52.073 "params": { 00:24:52.073 "name": "Nvme5", 00:24:52.073 "trtype": "tcp", 00:24:52.073 "traddr": "10.0.0.2", 00:24:52.073 "adrfam": "ipv4", 00:24:52.073 "trsvcid": "4420", 00:24:52.073 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:52.073 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:52.073 "hdgst": false, 00:24:52.073 "ddgst": false 00:24:52.073 }, 00:24:52.073 "method": "bdev_nvme_attach_controller" 00:24:52.073 },{ 00:24:52.073 "params": { 00:24:52.073 "name": "Nvme6", 00:24:52.073 "trtype": "tcp", 00:24:52.073 "traddr": "10.0.0.2", 00:24:52.073 "adrfam": "ipv4", 00:24:52.073 "trsvcid": "4420", 00:24:52.073 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:52.073 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:52.073 "hdgst": false, 00:24:52.073 "ddgst": false 00:24:52.073 }, 00:24:52.073 "method": "bdev_nvme_attach_controller" 00:24:52.073 },{ 00:24:52.073 "params": { 00:24:52.073 "name": "Nvme7", 00:24:52.073 "trtype": "tcp", 00:24:52.073 "traddr": "10.0.0.2", 
00:24:52.073 "adrfam": "ipv4", 00:24:52.073 "trsvcid": "4420", 00:24:52.073 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:52.073 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:52.073 "hdgst": false, 00:24:52.073 "ddgst": false 00:24:52.073 }, 00:24:52.073 "method": "bdev_nvme_attach_controller" 00:24:52.073 },{ 00:24:52.073 "params": { 00:24:52.073 "name": "Nvme8", 00:24:52.073 "trtype": "tcp", 00:24:52.073 "traddr": "10.0.0.2", 00:24:52.073 "adrfam": "ipv4", 00:24:52.073 "trsvcid": "4420", 00:24:52.073 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:52.073 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:52.073 "hdgst": false, 00:24:52.073 "ddgst": false 00:24:52.073 }, 00:24:52.073 "method": "bdev_nvme_attach_controller" 00:24:52.073 },{ 00:24:52.073 "params": { 00:24:52.073 "name": "Nvme9", 00:24:52.073 "trtype": "tcp", 00:24:52.073 "traddr": "10.0.0.2", 00:24:52.073 "adrfam": "ipv4", 00:24:52.073 "trsvcid": "4420", 00:24:52.073 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:52.073 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:52.073 "hdgst": false, 00:24:52.073 "ddgst": false 00:24:52.073 }, 00:24:52.073 "method": "bdev_nvme_attach_controller" 00:24:52.073 },{ 00:24:52.073 "params": { 00:24:52.073 "name": "Nvme10", 00:24:52.073 "trtype": "tcp", 00:24:52.073 "traddr": "10.0.0.2", 00:24:52.073 "adrfam": "ipv4", 00:24:52.073 "trsvcid": "4420", 00:24:52.073 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:52.073 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:52.073 "hdgst": false, 00:24:52.073 "ddgst": false 00:24:52.073 }, 00:24:52.073 "method": "bdev_nvme_attach_controller" 00:24:52.073 }' 00:24:52.073 [2024-10-07 09:45:51.589291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.073 [2024-10-07 09:45:51.654521] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.457 Running I/O for 10 seconds... 
00:24:53.457 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # (( i == 0 ))
00:24:53.457 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@867 -- # return 0
00:24:53.457 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:24:53.457 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@564 -- # xtrace_disable
00:24:53.457 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:24:53.718 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:24:53.718 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:53.718 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:24:53.718 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:24:53.718 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:24:53.718 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:24:53.718 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:24:53.718 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:24:53.718 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:24:53.718 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:24:53.718 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:24:53.718 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@564 -- # xtrace_disable
00:24:53.718 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:24:53.718 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:24:53.718 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3
00:24:53.718 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']'
00:24:53.718 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:24:53.978 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:24:53.978 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:24:53.978 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:24:53.978 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:24:53.978 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@564 -- # xtrace_disable
00:24:53.978 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:24:53.978 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:24:54.238 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67
00:24:54.238 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']'
00:24:54.238 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:24:54.238 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:24:54.238 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:24:54.514 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:24:54.514 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:24:54.514 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@564 -- # xtrace_disable
00:24:54.514 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:24:54.514 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:24:54.514 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131
00:24:54.514 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:24:54.514 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:24:54.514 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:24:54.514 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:24:54.514 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3433553
00:24:54.514 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' -z 3433553 ']'
00:24:54.514 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # kill -0 3433553
00:24:54.514 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # uname
00:24:54.514 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:24:54.514 09:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3433553
00:24:54.514 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # process_name=reactor_1
00:24:54.514 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']'
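The xtrace above (target/shutdown.sh lines @58 through @70) is the waitforio helper polling bdevperf until Nvme1n1 has served at least 100 reads: num_read_ops climbs 3, 67, 131 across three 0.25 s polls before the test proceeds to kill the target. A minimal sketch of that loop as reconstructed from the trace; illustrative only, the authoritative implementation lives in test/nvmf/target/shutdown.sh, and rpc_cmd is the autotest wrapper around scripts/rpc.py:

# Sketch of waitforio, reconstructed from the xtrace line numbers above
waitforio() {
    # Usage: waitforio <rpc socket> <bdev>, e.g. waitforio /var/tmp/bdevperf.sock Nvme1n1
    local ret=1
    local i
    for (( i = 10; i != 0; i-- )); do
        # bdev_get_iostat emits JSON; .bdevs[0].num_read_ops is the
        # cumulative read count for the requested bdev
        read_io_count=$(rpc_cmd -s "$1" bdev_get_iostat -b "$2" | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0   # enough I/O observed; safe to start shutting down
            break
        fi
        sleep 0.25
    done
    return $ret
}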
00:24:54.514 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3433553'
00:24:54.515 killing process with pid 3433553
00:24:54.515 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # kill 3433553
00:24:54.515 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # wait 3433553
00:24:54.515 [2024-10-07 09:45:54.015536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230e090 is same with the state(6) to be set
[... identical nvmf_tcp_qpair_set_recv_state *ERROR* line repeated dozens of times from 09:45:54.015536 through 09:45:54.022245, cycling through tqpair=0x230e090, 0x230ffe0, 0x230e560, 0x230ea30, 0x209f760 and 0x209fc30; duplicates trimmed ...]
00:24:54.519 [2024-10-07 09:45:54.022844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set
00:24:54.519 [2024-10-07 09:45:54.022867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same
with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022982] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.022996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.023001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.023006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.023011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.023016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.023022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.023027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.023031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.023036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.023040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.023045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.023050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.519 [2024-10-07 09:45:54.023054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.520 [2024-10-07 09:45:54.023060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.520 [2024-10-07 09:45:54.023064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.520 [2024-10-07 09:45:54.023069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.520 [2024-10-07 09:45:54.023074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.520 [2024-10-07 09:45:54.023078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.520 [2024-10-07 09:45:54.023083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the 
state(6) to be set 00:24:54.520 [2024-10-07 09:45:54.023088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.520 [2024-10-07 09:45:54.023093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.520 [2024-10-07 09:45:54.023097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set 00:24:54.520 [2024-10-07 09:45:54.027371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10cf550 is same with the state(6) to be set 00:24:54.520 [2024-10-07 09:45:54.027500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xc6cc10 is same with the state(6) to be set 00:24:54.520 [2024-10-07 09:45:54.027588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6bbf0 is same with the state(6) to be set 00:24:54.520 [2024-10-07 09:45:54.027683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d0690 is same with the state(6) to be set 00:24:54.520 [2024-10-07 09:45:54.027774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 
09:45:54.027791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8c610 is same with the state(6) to be set 00:24:54.520 [2024-10-07 09:45:54.027862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1097350 is same with the state(6) to be set 00:24:54.520 [2024-10-07 09:45:54.027947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.027987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.027997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.028004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.028011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc72760 is same with the state(6) to be set 00:24:54.520 [2024-10-07 09:45:54.028036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.028044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.028053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.028060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.028069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.028076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.028084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.520 [2024-10-07 09:45:54.028091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.520 [2024-10-07 09:45:54.028099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc761d0 is same with the state(6) to be set 00:24:54.521 [2024-10-07 09:45:54.028121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.521 [2024-10-07 09:45:54.028129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.028137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.521 [2024-10-07 09:45:54.028144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.028152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.521 [2024-10-07 09:45:54.028160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.028168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.521 [2024-10-07 
09:45:54.028175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.028182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc736f0 is same with the state(6) to be set 00:24:54.521 [2024-10-07 09:45:54.028913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.028937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.028953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.028961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.028971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.028982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.028991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.028999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029093] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.521 [2024-10-07 09:45:54.029508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.521 [2024-10-07 09:45:54.029518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029775] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.029983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.029993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.030000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.030009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.522 [2024-10-07 09:45:54.030016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.522 [2024-10-07 09:45:54.030043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:54.522 [2024-10-07 09:45:54.030084] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x111ae50 was disconnected and freed. reset controller. 
00:24:54.522 [2024-10-07 09:45:54.032378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f770 is same with the state(6) to be set (last message repeated ~14 more times through 09:45:54.032469)
00:24:54.522 [2024-10-07 09:45:54.032942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230fc60 is same with the state(6) to be set (last message repeated 1 more time through 09:45:54.032959)
00:24:54.523 [2024-10-07 09:45:54.034763–09:45:54.042321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0–41 nsid:1 lba:24576–29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each with nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.524 [2024-10-07 09:45:54.042712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:54.524 [2024-10-07 09:45:54.042793] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1079630 was disconnected and freed. reset controller. 00:24:54.524 [2024-10-07 09:45:54.042927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.524 [2024-10-07 09:45:54.042942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.524 [2024-10-07 09:45:54.042958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.524 [2024-10-07 09:45:54.042977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.524 [2024-10-07 09:45:54.042992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.524 [2024-10-07 09:45:54.042999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d0480 is same with the state(6) to be set 00:24:54.524 [2024-10-07 09:45:54.043015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10cf550 (9): Bad file descriptor 00:24:54.524 [2024-10-07 09:45:54.043029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6cc10 (9): Bad file descriptor 00:24:54.524 [2024-10-07 09:45:54.043044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6bbf0 (9): Bad file descriptor 00:24:54.524 [2024-10-07 09:45:54.043064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d0690 (9): Bad file descriptor 00:24:54.524 [2024-10-07 09:45:54.043077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8c610 (9): Bad file descriptor 00:24:54.524 [2024-10-07 09:45:54.043097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1097350 (9): Bad file descriptor 00:24:54.524 [2024-10-07 09:45:54.043111] 
00:24:54.524 [2024-10-07 09:45:54.043363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.525 [2024-10-07 09:45:54.043377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further command / ABORTED - SQ DELETION (00/08) completion pairs trimmed: WRITE cid:51-63 (lba:31104-32640) and cid:0-3 (lba:32768-33152), READ cid:4-49 (lba:25088-30848); 09:45:54.043392-09:45:54.044475 ...]
00:24:54.526 [2024-10-07 09:45:54.044527] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1119960 was disconnected and freed. reset controller.
00:24:54.526 [2024-10-07 09:45:54.048599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:24:54.526 [2024-10-07 09:45:54.048635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:24:54.526 [2024-10-07 09:45:54.048647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:24:54.526 [2024-10-07 09:45:54.048660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d0480 (9): Bad file descriptor
00:24:54.526 [2024-10-07 09:45:54.049477] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:54.526 [2024-10-07 09:45:54.049931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.526 [2024-10-07 09:45:54.049970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1097350 with addr=10.0.0.2, port=4420
00:24:54.526 [2024-10-07 09:45:54.049983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1097350 is same with the state(6) to be set
00:24:54.526 [2024-10-07 09:45:54.050307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.526 [2024-10-07 09:45:54.050319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6bbf0 with addr=10.0.0.2, port=4420
00:24:54.526 [2024-10-07 09:45:54.050327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6bbf0 is same with the state(6) to be set
[... "Unexpected PDU type 0x00" error trimmed, repeated three more times (09:45:54.050400, .050446, .050485) ...]
00:24:54.526 [2024-10-07 09:45:54.050822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.526 [2024-10-07 09:45:54.050836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further command / ABORTED - SQ DELETION (00/08) completion pairs trimmed: WRITE cid:60-63 (lba:32256-32640), READ cid:0-58 (lba:24576-32000); 09:45:54.050852-09:45:54.051938 ...]
00:24:54.528 [2024-10-07 09:45:54.051946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1076c70 is same with the state(6) to be set
00:24:54.528 [2024-10-07 09:45:54.051988] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: 
*NOTICE*: qpair 0x1076c70 was disconnected and freed. reset controller. 00:24:54.528 [2024-10-07 09:45:54.052030] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:54.528 [2024-10-07 09:45:54.052237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.528 [2024-10-07 09:45:54.052251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d0480 with addr=10.0.0.2, port=4420 00:24:54.528 [2024-10-07 09:45:54.052259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d0480 is same with the state(6) to be set 00:24:54.528 [2024-10-07 09:45:54.052271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1097350 (9): Bad file descriptor 00:24:54.528 [2024-10-07 09:45:54.052282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6bbf0 (9): Bad file descriptor 00:24:54.528 [2024-10-07 09:45:54.052358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.528 [2024-10-07 09:45:54.052370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.528 [2024-10-07 09:45:54.052382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.528 [2024-10-07 09:45:54.052390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.528 [2024-10-07 09:45:54.052400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.528 [2024-10-07 09:45:54.052408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.528 [2024-10-07 09:45:54.052417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.528 [2024-10-07 09:45:54.052425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.528 [2024-10-07 09:45:54.052435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.528 [2024-10-07 09:45:54.052442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.528 [2024-10-07 09:45:54.052451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.528 [2024-10-07 09:45:54.052459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.528 [2024-10-07 09:45:54.052468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.528 [2024-10-07 09:45:54.052476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.528 [2024-10-07 09:45:54.052489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.528 [2024-10-07 09:45:54.052497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.528 [2024-10-07 09:45:54.052507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.528 [2024-10-07 09:45:54.052514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.528 [2024-10-07 09:45:54.052524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.528 [2024-10-07 09:45:54.052531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.528 [2024-10-07 09:45:54.052540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.528 [2024-10-07 09:45:54.052548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.528 [2024-10-07 09:45:54.052558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.528 [2024-10-07 09:45:54.052565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.528 [2024-10-07 09:45:54.052575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.528 [2024-10-07 09:45:54.052582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.528 [2024-10-07 09:45:54.052592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.528 [2024-10-07 09:45:54.052599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.528 [2024-10-07 09:45:54.052608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.528 [2024-10-07 09:45:54.052621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.528 [2024-10-07 09:45:54.052631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.528 [2024-10-07 09:45:54.052638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.528 [2024-10-07 09:45:54.052647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.528 [2024-10-07 09:45:54.052655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.528 [2024-10-07 09:45:54.052664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.528 [2024-10-07 09:45:54.052672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.052990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.052997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:54.529 [2024-10-07 09:45:54.053014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 
09:45:54.053183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.529 [2024-10-07 09:45:54.053350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.529 [2024-10-07 09:45:54.053357] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.053367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.053374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.053384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.053391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.053400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.053407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.053417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.053424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.053434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.053441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.053450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.053458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.053466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c340 is same with the state(6) to be set 00:24:54.530 [2024-10-07 09:45:54.053508] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x111c340 was disconnected and freed. reset controller. 00:24:54.530 [2024-10-07 09:45:54.054762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:54.530 [2024-10-07 09:45:54.054791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d0480 (9): Bad file descriptor 00:24:54.530 [2024-10-07 09:45:54.054802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:54.530 [2024-10-07 09:45:54.054810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:54.530 [2024-10-07 09:45:54.054820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
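
For reference, the "(00/08) ... p:0 m:0 dnr:0" notation repeated in the completion prints above is the NVMe completion-queue-entry status field: status code type 0x0 (generic command status) with status code 0x08 ("Command Aborted due to SQ Deletion"), plus the phase (p), more (m), and do-not-retry (dnr) bits — exactly what every in-flight READ/WRITE reports when its submission queue is torn down during the reset. A minimal standalone decoder sketch in C (not SPDK code; the field layout follows the NVMe base specification, CQE Dword 3 bits 31:16):

```c
#include <stdint.h>
#include <stdio.h>

/* Decode the 16-bit NVMe CQE status field (CQE Dword 3, bits 31:16,
 * viewed here with the phase tag as bit 0).
 * Layout per the NVMe base specification:
 *   bit  0     P   (phase tag)
 *   bits 8:1   SC  (status code)
 *   bits 11:9  SCT (status code type)
 *   bits 13:12 CRD (command retry delay)
 *   bit  14    M   (more)
 *   bit  15    DNR (do not retry)
 */
static void decode_status(uint16_t status)
{
    unsigned p   = status & 0x1;
    unsigned sc  = (status >> 1) & 0xff;
    unsigned sct = (status >> 9) & 0x7;
    unsigned m   = (status >> 14) & 0x1;
    unsigned dnr = (status >> 15) & 0x1;

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* SCT 0x0 / SC 0x08 == "Command Aborted due to SQ Deletion",
     * the status printed for the aborted I/O in the log above. */
    decode_status(0x08 << 1);
    return 0;
}
```

Running this prints "(00/08) p:0 m:0 dnr:0", matching the log's SCT/SC convention; dnr:0 means the command is retryable once the controller comes back, which is why the bdev layer queues a reset rather than failing the I/O outright.
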
00:24:54.530 [2024-10-07 09:45:54.054834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:54.530 [2024-10-07 09:45:54.054842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:54.530 [2024-10-07 09:45:54.054850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:54.530 [2024-10-07 09:45:54.054895] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:54.530 [2024-10-07 09:45:54.056165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.530 [2024-10-07 09:45:54.056180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.530 [2024-10-07 09:45:54.056210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:54.530 [2024-10-07 09:45:54.056554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.530 [2024-10-07 09:45:54.056568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8c610 with addr=10.0.0.2, port=4420 00:24:54.530 [2024-10-07 09:45:54.056576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8c610 is same with the state(6) to be set 00:24:54.530 [2024-10-07 09:45:54.056583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:54.530 [2024-10-07 09:45:54.056590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:54.530 [2024-10-07 09:45:54.056597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
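
The "connect() failed, errno = 111" errors from posix_sock_create above are ECONNREFUSED (errno 111 on Linux): while the target side is being torn down, nothing is accepting TCP connections on 10.0.0.2:4420, so each reconnect attempt is refused and spdk_nvme_ctrlr_reconnect_poll_async reports "controller reinitialization failed" until the listener returns. A minimal sketch of that connect-and-retry behavior with plain POSIX sockets (illustrative only, not the SPDK reconnect path; the host/port and the fixed five-attempt, one-second backoff are assumptions):

```c
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try to connect to an NVMe/TCP-style listener, retrying while the
 * target refuses connections (ECONNREFUSED == 111 on Linux). */
int main(void)
{
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 1; attempt <= 5; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("connected on attempt %d\n", attempt);
            close(fd);
            return 0;
        }
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        sleep(1);                                 /* back off before retrying */
    }
    return 1;
}
```

Against a refusing target each attempt prints "connect() failed, errno = 111 (Connection refused)", mirroring the log; in the test run the retries eventually succeed once the subsystem listener is re-created.
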
00:24:54.530 [2024-10-07 09:45:54.056643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 
09:45:54.056822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.056988] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.056996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.057005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.057012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.057027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.057035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.057044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.057051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.057061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.530 [2024-10-07 09:45:54.057068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.530 [2024-10-07 09:45:54.057078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057330] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.531 [2024-10-07 09:45:54.057731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.531 [2024-10-07 09:45:54.057739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a270 is same with the state(6) to be set 00:24:54.531 [2024-10-07 09:45:54.059007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.532 [2024-10-07 09:45:54.059021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.532 [2024-10-07 09:45:54.059035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.532 [2024-10-07 09:45:54.059044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.532 [2024-10-07 09:45:54.059055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.532 [2024-10-07 09:45:54.059065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.532 [2024-10-07 09:45:54.059076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.532 [2024-10-07 09:45:54.059085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.532 [2024-10-07 09:45:54.059097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.532 [2024-10-07 09:45:54.059105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.532 [2024-10-07 09:45:54.059115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.532 [2024-10-07 09:45:54.059122] nvme_qpair.c: 474:spdk_nvme_print_completion: 
00:24:54.531 [2024-10-07 09:45:54.059007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.532 [2024-10-07 09:45:54.059021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/completion pairs for cid:1-63 (lba:16512-24448, len:128), each ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:54.533 [2024-10-07 09:45:54.060122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b550 is same with the state(6) to be set
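
Note: within each dump the LBA advances in lockstep with the cid: every command is len:128 blocks and cid n sits at base_lba + n * 128, so a full 64-entry queue covers one contiguous 8192-block span (16384-24575 in the dump above, 24576-32767 in the next). A small sketch reproducing that progression (values taken from the log; the loop itself is illustrative):

/* Reproduce the LBA progression in the aborted-command dumps: len:128-block
 * READs, cid n at base_lba + n * 128, 64 commands per drained queue. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t base_lba = 16384; /* cid:0 lba of the first full dump above */
    const uint32_t len = 128;

    for (uint32_t cid = 0; cid < 64; cid++) {
        uint64_t lba = base_lba + (uint64_t)cid * len;
        if (cid < 2 || cid > 61) { /* print just the edges: cid 0, 1, 62, 63 */
            printf("READ cid:%" PRIu32 " lba:%" PRIu64 " len:%" PRIu32 "\n",
                   cid, lba, len);
        }
    }
    return 0;
}

The printed edge values (16384, 16512, 24320, 24448) match the first and last READ records of that queue in the log.
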
00:24:54.533 [2024-10-07 09:45:54.061389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.533 [2024-10-07 09:45:54.061405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/completion pairs for cid:1-63 (lba:24704-32640, len:128), each ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:54.535 [2024-10-07 09:45:54.062513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1118400 is same with the state(6) to be set
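
Note: the interleaved nvme_tcp.c *ERROR* lines are not additional failures: nvme_tcp_qpair_set_recv_state() logs when it is asked to move a qpair into the receive state it already occupies, which can happen when several teardown paths each push the qpair toward the same terminal state (printed here only by number, state(6)). A hedged sketch of that guard shape (hypothetical enum values and function shape; the real state numbering lives in SPDK's nvme_tcp.c):

/* Sketch of the duplicate-transition guard behind the *ERROR* lines above.
 * The enum values here are assumptions for illustration, not SPDK's. */
#include <stdio.h>

enum recv_state { /* hypothetical numbering; the log only shows "state(6)" */
    RECV_STATE_READY = 0,
    RECV_STATE_ERROR = 6,
};

struct tqpair { enum recv_state recv_state; };

static void set_recv_state(struct tqpair *tqpair, enum recv_state state)
{
    if (tqpair->recv_state == state) {
        fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, (int)state);
        return; /* redundant transition: log it and carry on */
    }
    tqpair->recv_state = state;
}

int main(void)
{
    struct tqpair q = { .recv_state = RECV_STATE_ERROR };
    set_recv_state(&q, RECV_STATE_ERROR); /* reproduces the duplicate-state line */
    return 0;
}
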
00:24:54.535 [2024-10-07 09:45:54.064069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.535 [2024-10-07 09:45:54.064085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/completion pairs for cid:1-56 (lba:24704-31744, len:128), each ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:54.536 [2024-10-07 09:45:54.065055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.536 [2024-10-07
09:45:54.065062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.536 [2024-10-07 09:45:54.065072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.536 [2024-10-07 09:45:54.065079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.536 [2024-10-07 09:45:54.065088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.536 [2024-10-07 09:45:54.065095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.536 [2024-10-07 09:45:54.065105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.536 [2024-10-07 09:45:54.065112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.536 [2024-10-07 09:45:54.065121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.536 [2024-10-07 09:45:54.065129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.536 [2024-10-07 09:45:54.065138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.536 [2024-10-07 09:45:54.065146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.536 [2024-10-07 09:45:54.065155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.536 [2024-10-07 09:45:54.065162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.536 [2024-10-07 09:45:54.065170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10781f0 is same with the state(6) to be set 00:24:54.536 [2024-10-07 09:45:54.066955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.536 [2024-10-07 09:45:54.066975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.536 [2024-10-07 09:45:54.066988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.066995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.537 [2024-10-07 09:45:54.067583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.537 [2024-10-07 09:45:54.067590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.067989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.067999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.068006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.068016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.068023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.068033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.068040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.068050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.538 [2024-10-07 09:45:54.068057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.538 [2024-10-07 09:45:54.068065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107abb0 is same with the state(6) to be set 00:24:54.538 [2024-10-07 09:45:54.070187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.538 [2024-10-07 09:45:54.070211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.538 [2024-10-07 09:45:54.070222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:54.538 [2024-10-07 09:45:54.070232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:54.538 [2024-10-07 09:45:54.070607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.538 [2024-10-07 09:45:54.070627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc72760 with addr=10.0.0.2, port=4420 00:24:54.538 [2024-10-07 09:45:54.070636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc72760 is same with the state(6) to be set 00:24:54.538 [2024-10-07 09:45:54.070648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8c610 (9): Bad file descriptor 00:24:54.538 [2024-10-07 09:45:54.070689] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:54.538 [2024-10-07 09:45:54.070703] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:54.538 [2024-10-07 09:45:54.070718] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
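Note: the "(00/08)" in each aborted completion above is the NVMe status pair (status code type / status code). Status code type 0x0 is Generic Command Status, and generic status 0x08 is Command Aborted due to SQ Deletion, which is what queued reads report once the reset path deletes the I/O submission queue. A minimal, illustrative decode sketch (Python; only the values seen in this log are tabulated, the full tables live in the NVMe base specification):

    # Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion.
    STATUS_CODE_TYPES = {0x0: "GENERIC"}
    GENERIC_STATUS = {0x00: "SUCCESS", 0x08: "ABORTED - SQ DELETION"}

    def decode_status(pair: str) -> str:
        sct, sc = (int(field, 16) for field in pair.split("/"))
        sct_name = STATUS_CODE_TYPES.get(sct, f"sct={sct:#x}")
        sc_name = GENERIC_STATUS.get(sc, f"sc={sc:#x}") if sct == 0x0 else f"sc={sc:#x}"
        return f"{sct_name}: {sc_name}"

    print(decode_status("00/08"))  # -> GENERIC: ABORTED - SQ DELETION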
00:24:54.538 [2024-10-07 09:45:54.070729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc72760 (9): Bad file descriptor 00:24:54.538 [2024-10-07 09:45:54.071058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:54.538 task offset: 28160 on job bdev=Nvme5n1 fails 00:24:54.538 00:24:54.538 Latency(us) 00:24:54.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.538 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.538 Job: Nvme1n1 ended in about 0.97 seconds with error 00:24:54.538 Verification LBA range: start 0x0 length 0x400 00:24:54.538 Nvme1n1 : 0.97 132.34 8.27 66.17 0.00 318890.10 15619.41 242920.11 00:24:54.538 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.538 Job: Nvme2n1 ended in about 0.97 seconds with error 00:24:54.538 Verification LBA range: start 0x0 length 0x400 00:24:54.538 Nvme2n1 : 0.97 132.02 8.25 66.01 0.00 313126.40 21189.97 258648.75 00:24:54.538 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.538 Job: Nvme3n1 ended in about 0.97 seconds with error 00:24:54.538 Verification LBA range: start 0x0 length 0x400 00:24:54.538 Nvme3n1 : 0.97 197.54 12.35 65.85 0.00 230498.88 11141.12 251658.24 00:24:54.538 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.538 Job: Nvme4n1 ended in about 0.96 seconds with error 00:24:54.538 Verification LBA range: start 0x0 length 0x400 00:24:54.538 Nvme4n1 : 0.96 204.88 12.80 66.90 0.00 218456.90 18786.99 227191.47 00:24:54.538 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.538 Job: Nvme5n1 ended in about 0.95 seconds with error 00:24:54.538 Verification LBA range: start 0x0 length 0x400 00:24:54.538 Nvme5n1 : 0.95 201.26 12.58 67.09 0.00 216395.95 15947.09 235929.60 00:24:54.538 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.538 Job: Nvme6n1 ended in about 0.96 seconds with error 00:24:54.538 Verification LBA range: start 0x0 length 0x400 00:24:54.539 Nvme6n1 : 0.96 146.21 9.14 66.36 0.00 267507.83 18677.76 265639.25 00:24:54.539 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.539 Job: Nvme7n1 ended in about 0.96 seconds with error 00:24:54.539 Verification LBA range: start 0x0 length 0x400 00:24:54.539 Nvme7n1 : 0.96 199.38 12.46 66.46 0.00 208939.52 18786.99 246415.36 00:24:54.539 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.539 Job: Nvme8n1 ended in about 0.97 seconds with error 00:24:54.539 Verification LBA range: start 0x0 length 0x400 00:24:54.539 Nvme8n1 : 0.97 197.01 12.31 65.67 0.00 206999.68 16274.77 256901.12 00:24:54.539 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.539 Job: Nvme9n1 ended in about 0.96 seconds with error 00:24:54.539 Verification LBA range: start 0x0 length 0x400 00:24:54.539 Nvme9n1 : 0.96 200.98 12.56 66.99 0.00 197354.03 17476.27 253405.87 00:24:54.539 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.539 Job: Nvme10n1 ended in about 0.98 seconds with error 00:24:54.539 Verification LBA range: start 0x0 length 0x400 00:24:54.539 Nvme10n1 : 0.98 130.95 8.18 65.47 0.00 264035.84 15947.09 276125.01 00:24:54.539 =================================================================================================================== 00:24:54.539 
00:24:54.539 [2024-10-07 09:45:54.097236] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 
00:24:54.539 [2024-10-07 09:45:54.097285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 
00:24:54.539 [2024-10-07 09:45:54.097740 .. 098467] (condensed) three reconnect attempts fail identically: posix.c:1055 connect() failed, errno = 111; nvme_tcp.c:2399 sock connection error; nvme_tcp.c:337 recv state already set, for tqpair=0xc761d0, 0xc6cc10 and 0xc736f0 (all addr=10.0.0.2, port=4420) 
00:24:54.539 [2024-10-07 09:45:54.098478 .. 098494] nvme_ctrlr.c: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state; controller reinitialization failed; in failed state. 
00:24:54.539 [2024-10-07 09:45:54.099864 .. 099888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4], [cnode5], [cnode9] resetting controller 
00:24:54.539 [2024-10-07 09:45:54.099897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:54.539 [2024-10-07 09:45:54.100266 .. 100518] (condensed) two more reconnect attempts to 10.0.0.2:4420 fail the same way (connect() errno = 111, sock connection error, recv state already set) for tqpair=0x10d0690 and 0x10cf550 
00:24:54.539 [2024-10-07 09:45:54.100537 .. 100558] nvme_tcp.c:2196: Failed to flush tqpair=0xc761d0, 0xc6cc10, 0xc736f0 (9): Bad file descriptor 
00:24:54.539 [2024-10-07 09:45:54.100567 .. 100580] nvme_ctrlr.c: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state; controller reinitialization failed; in failed state. 
00:24:54.539 [2024-10-07 09:45:54.100625 .. 100662] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (4x) 
00:24:54.539 [2024-10-07 09:45:54.100959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:54.539 [2024-10-07 09:45:54.101313 .. 102004] (condensed) the connect() errno = 111 / sock connection error / recv state pattern repeats for tqpair=0xc6bbf0, 0x1097350 and 0x10d0480 (all addr=10.0.0.2, port=4420) 
00:24:54.539 [2024-10-07 09:45:54.102014 .. 102024] nvme_tcp.c:2196: Failed to flush tqpair=0x10d0690, 0x10cf550 (9): Bad file descriptor 
00:24:54.539 [2024-10-07 09:45:54.102033 .. 102100] nvme_ctrlr.c: [nqn.2016-06.io.spdk:cnode1], [cnode2], [cnode3]: Ctrlr is in error state; controller reinitialization failed; in failed state. 
00:24:54.539 [2024-10-07 09:45:54.102164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 
00:24:54.539 [2024-10-07 09:45:54.102174 .. 102187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (3x) 
00:24:54.539 [2024-10-07 09:45:54.102202 .. 102221] nvme_tcp.c:2196: Failed to flush tqpair=0xc6bbf0, 0x1097350, 0x10d0480 (9): Bad file descriptor 
00:24:54.539 [2024-10-07 09:45:54.102229 .. 102267] nvme_ctrlr.c: [nqn.2016-06.io.spdk:cnode8], [cnode10]: Ctrlr is in error state; controller reinitialization failed; in failed state. 
00:24:54.539 [2024-10-07 09:45:54.102295 .. 102303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (2x) 
00:24:54.539 [2024-10-07 09:45:54.102653 .. 102673] connect() failed, errno = 111; sock connection error; recv state already set, for tqpair=0xb8c610 (addr=10.0.0.2, port=4420) 
00:24:54.540 [2024-10-07 09:45:54.102680 .. 102739] nvme_ctrlr.c: [nqn.2016-06.io.spdk:cnode4], [cnode5], [cnode9]: Ctrlr is in error state; controller reinitialization failed; in failed state. 
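Note: errno 111 in the connect() failures above is ECONNREFUSED on mainstream Linux: the target side is already gone, so each reconnect to 10.0.0.2:4420 is actively refused rather than timing out. A quick check (Python; assumes Linux errno numbering):

    import errno
    import os

    # errno 111 as reported by posix_sock_create in the log above
    assert errno.ECONNREFUSED == 111
    print(os.strerror(errno.ECONNREFUSED))  # -> Connection refused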
00:24:54.540 [2024-10-07 09:45:54.102768 .. 102781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (3x) 
00:24:54.540 [2024-10-07 09:45:54.102792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8c610 (9): Bad file descriptor 
00:24:54.540 [2024-10-07 09:45:54.102819 .. 102833] nvme_ctrlr.c: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state; controller reinitialization failed; in failed state. 
00:24:54.540 [2024-10-07 09:45:54.102863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:54.801 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 
00:24:55.745 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 (trace condensed; the steps below share this prefix) 
  -- target/shutdown.sh@138 -- # NOT wait 3433893 
  -- common/autotest_common.sh@653 -- # local es=0 
  -- common/autotest_common.sh@655 -- # valid_exec_arg wait 3433893 
  -- common/autotest_common.sh@641 -- # local arg=wait 
  -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 
  -- common/autotest_common.sh@645 -- # type -t wait 
  -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 
  -- common/autotest_common.sh@656 -- # wait 3433893 
  -- common/autotest_common.sh@656 -- # es=255 
  -- common/autotest_common.sh@664 -- # (( es > 128 )) 
  -- common/autotest_common.sh@665 -- # es=127 
  -- common/autotest_common.sh@666 -- # case "$es" in 
  -- common/autotest_common.sh@673 -- # es=1 
  -- common/autotest_common.sh@680 -- # (( !es == 0 )) 
  -- target/shutdown.sh@140 -- # stoptarget 
  -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 
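Note: the NOT/wait trace above is the harness confirming that the I/O process (pid 3433893) exited with an error: the raw status 255 from wait is first clamped to 127 (statuses above 128 are treated as out of range) and then collapsed to 1, and the negated check (( !es == 0 )) passes precisely because the result is non-zero. A simplified re-expression of those steps (Python, illustrative; the real logic is bash in common/autotest_common.sh):

    def normalized_exit_status(es: int) -> int:
        # Mirror the traced steps: es=255 -> (( es > 128 )) -> es=127 -> case -> es=1
        if es > 128:
            es = 127
        if es != 0:
            es = 1
        return es

    # NOT succeeds when the wrapped command fails, i.e. the status is non-zero:
    assert normalized_exit_status(255) == 1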
00:24:55.745 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 (teardown continues; the steps below share this prefix) 
  -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
  -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 
  -- target/shutdown.sh@46 -- # nvmftestfini 
  -- nvmf/common.sh@514 -- # nvmfcleanup 
  -- nvmf/common.sh@121 -- # sync 
  -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
  -- nvmf/common.sh@124 -- # set +e 
  -- nvmf/common.sh@125 -- # for i in {1..20} 
  -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
rmmod nvme_tcp 
rmmod nvme_fabrics 
rmmod nvme_keyring 
  -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
  -- nvmf/common.sh@128 -- # set -e 
  -- nvmf/common.sh@129 -- # return 0 
  -- nvmf/common.sh@515 -- # '[' -n 3433553 ']' 
  -- nvmf/common.sh@516 -- # killprocess 3433553 
  -- common/autotest_common.sh@953 -- # '[' -z 3433553 ']' 
  -- common/autotest_common.sh@957 -- # kill -0 3433553 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 957: kill: (3433553) - No such process 
  -- common/autotest_common.sh@980 -- # echo 'Process with pid 3433553 is not found' 
Process with pid 3433553 is not found 
  -- nvmf/common.sh@518 -- # '[' '' == iso ']' 
  -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 
  -- nvmf/common.sh@522 -- # nvmf_tcp_fini 
  -- nvmf/common.sh@297 -- # iptr 
  -- nvmf/common.sh@789 -- # iptables-save 
  -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 
  -- nvmf/common.sh@789 -- # iptables-restore 
  -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
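Note: the "kill -0 3433553" traced above sends no signal at all; signal 0 only asks the kernel whether the PID still exists, and the "No such process" reply is what lets killprocess report that the nvmf target is already gone. The same probe in Python (illustrative):

    import os

    def pid_exists(pid: int) -> bool:
        # Signal 0 delivers nothing; the kernel only validates the target PID.
        try:
            os.kill(pid, 0)
        except ProcessLookupError:   # ESRCH -> "No such process", as in the log
            return False
        except PermissionError:      # EPERM -> the process exists but isn't ours
            return True
        return True

    print(pid_exists(3433553))  # -> False once the old target has exited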
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:55.745 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.745 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.745 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:58.289 00:24:58.289 real 0m7.818s 00:24:58.289 user 0m18.950s 00:24:58.289 sys 0m1.320s 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # xtrace_disable 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:58.289 ************************************ 00:24:58.289 END TEST nvmf_shutdown_tc3 00:24:58.289 ************************************ 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1110 -- # xtrace_disable 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:58.289 ************************************ 00:24:58.289 START TEST nvmf_shutdown_tc4 00:24:58.289 ************************************ 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # nvmf_shutdown_tc4 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.289 09:45:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:58.289 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.289 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:58.290 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:58.290 09:45:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:58.290 Found net devices under 0000:31:00.0: cvl_0_0 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:58.290 Found net devices under 0000:31:00.1: cvl_0_1 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
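The loop traced above resolves each supported NIC to its kernel net device through sysfs; with NET_TYPE=phy and the two E810 ports (0x8086:0x159b at 0000:31:00.0/.1), that yields cvl_0_0 and cvl_0_1. A condensed sketch of the logic, simplified from nvmf/common.sh rather than copied verbatim, assuming pci_devs already holds the gathered PCI addresses:

for pci in "${pci_devs[@]}"; do
    # Each PCI function exposes its bound interface(s) under .../net/.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")          # here: cvl_0_0 and cvl_0_1
done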
-- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:58.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
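Before the ping replies below, the topology nvmf_tcp_init just built is worth restating: the target-side port (cvl_0_0, 10.0.0.2) is moved into its own network namespace while the initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace, so a single host exercises a real NIC-to-NIC NVMe/TCP path. Condensed from the trace above (same commands, trace noise removed):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open NVMe/TCP port
ping -c 1 10.0.0.2               # root ns -> target ns, verified just below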
00:24:58.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:24:58.290 00:24:58.290 --- 10.0.0.2 ping statistics --- 00:24:58.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.290 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:58.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:24:58.290 00:24:58.290 --- 10.0.0.1 ping statistics --- 00:24:58.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.290 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=3435082 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 3435082 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@834 -- # '[' -z 3435082 ']' 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local max_retries=100 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@843 -- # xtrace_disable 00:24:58.290 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:58.551 [2024-10-07 09:45:57.990540] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:24:58.551 [2024-10-07 09:45:57.990596] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.551 [2024-10-07 09:45:58.078666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.551 [2024-10-07 09:45:58.139138] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.551 [2024-10-07 09:45:58.139175] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.551 [2024-10-07 09:45:58.139181] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.551 [2024-10-07 09:45:58.139186] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.551 [2024-10-07 09:45:58.139190] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.551 [2024-10-07 09:45:58.140743] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.551 [2024-10-07 09:45:58.140956] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.551 [2024-10-07 09:45:58.141097] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.551 [2024-10-07 09:45:58.141097] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:24:59.120 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:24:59.120 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@867 -- # return 0 00:24:59.120 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:59.120 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@733 -- # xtrace_disable 00:24:59.120 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:59.380 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.380 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:59.380 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:59.380 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:59.380 [2024-10-07 09:45:58.826125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.380 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:59.380 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:59.380 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:59.381 09:45:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:59.381 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:59.381 Malloc1 00:24:59.381 [2024-10-07 09:45:58.924885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.381 Malloc2 00:24:59.381 Malloc3 00:24:59.381 Malloc4 00:24:59.643 Malloc5 00:24:59.643 Malloc6 00:24:59.643 Malloc7 00:24:59.643 Malloc8 00:24:59.643 Malloc9 00:24:59.643 Malloc10 00:24:59.643 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:59.643 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:59.643 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@733 -- # xtrace_disable 00:24:59.643 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:59.903 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3435457 00:24:59.903 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:24:59.903 09:45:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:24:59.903 [2024-10-07 09:45:59.392957] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
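At this point the target (pid 3435082) is listening on 10.0.0.2:4420 with ten malloc-backed subsystems, and spdk_nvme_perf (perfpid 3435457) has just been launched against it with a 20-second randwrite workload. A hedged reconstruction of the setup and fault injection, using standard rpc.py method names; the bdev geometry (64 MiB, 512 B blocks), the NQN pattern, and the direct rpc.py calls (the script batches its RPCs through rpcs.txt) are assumptions, not read from this trace:

# Transport options exactly as traced: nvmf_create_transport -t tcp -o -u 8192.
rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in {1..10}; do
    rpc.py bdev_malloc_create -b "Malloc$i" 64 512          # sizes assumed
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
# Perf invocation verbatim from the trace: a 20 s randwrite load over NVMe/TCP
# against the listener created above.
spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -P 4 \
    -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' &
perfpid=$!
sleep 5           # target/shutdown.sh@150: let I/O reach steady state
kill "$nvmfpid"   # killprocess: take the target down while writes are in flight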
00:25:05.194 09:46:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:05.194 09:46:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3435082 00:25:05.194 09:46:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@953 -- # '[' -z 3435082 ']' 00:25:05.194 09:46:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # kill -0 3435082 00:25:05.194 09:46:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # uname 00:25:05.194 09:46:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:25:05.194 09:46:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3435082 00:25:05.194 09:46:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:25:05.194 09:46:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:25:05.194 09:46:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3435082' 00:25:05.194 killing process with pid 3435082 00:25:05.194 09:46:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # kill 3435082 00:25:05.194 09:46:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # wait 3435082 00:25:05.194 [2024-10-07 09:46:04.399643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd440 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.399690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd440 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.399697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd440 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.399703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd440 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.399708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd440 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.399713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd440 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.399718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd440 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.399962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd910 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.399990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd910 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.399996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd910 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.400001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x19fd910 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.400006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd910 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.400295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0590 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.400318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0590 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.400337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0590 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.400606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fcf70 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.400636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fcf70 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.400643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fcf70 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.400648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fcf70 is same with the state(6) to be set 00:25:05.194 [2024-10-07 09:46:04.400654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fcf70 is same with the state(6) to be set 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 
Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 starting I/O failed: -6 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.194 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 starting I/O failed: -6 00:25:05.195 [2024-10-07 09:46:04.402341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.195 NVMe io qpair process completion error 00:25:05.195 [2024-10-07 09:46:04.403278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc100 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc100 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc100 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc100 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403310] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc100 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc100 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc100 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc5d0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc5d0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc5d0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc5d0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc5d0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc5d0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc5d0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc5d0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fcaa0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fcaa0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fcaa0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fcaa0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fcaa0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fcaa0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fcaa0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fcaa0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fcaa0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.403864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fcaa0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.406228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0de0 is same with the 
state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.406244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0de0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.406457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e12b0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.406472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e12b0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.406478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e12b0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.406483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e12b0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.406488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e12b0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.406493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e12b0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.406497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e12b0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.406502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e12b0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.406863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e1780 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.406878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e1780 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.406883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e1780 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.406888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e1780 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.406893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e1780 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.406898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e1780 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.406903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e1780 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.406908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e1780 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.407140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0910 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.407161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0910 is same with the state(6) to be set 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 [2024-10-07 09:46:04.408752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2120 is same with the state(6) to be set 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 [2024-10-07 09:46:04.408769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x19e2120 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.408774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2120 is same with the state(6) to be set 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 starting I/O failed: -6 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 starting I/O failed: -6 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 starting I/O failed: -6 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 [2024-10-07 09:46:04.408957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e25f0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.408972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e25f0 is same with Write completed with error (sct=0, sc=8) 00:25:05.195 the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.408979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e25f0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.408984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e25f0 is same with the state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.408989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e25f0 is same with the state(6) to be set 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 [2024-10-07 09:46:04.408994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e25f0 is same with the state(6) to be set 00:25:05.195 starting I/O failed: -6 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 starting I/O failed: -6 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 starting I/O failed: -6 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 starting I/O failed: -6 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 [2024-10-07 09:46:04.409248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2ac0 is same with the state(6) to be set 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 [2024-10-07 09:46:04.409263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2ac0 is same with the state(6) to be set 00:25:05.195 starting I/O failed: -6 00:25:05.195 [2024-10-07 09:46:04.409268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2ac0 is same with the 
state(6) to be set 00:25:05.195 [2024-10-07 09:46:04.409273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2ac0 is same with the state(6) to be set 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 starting I/O failed: -6 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 starting I/O failed: -6 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 [2024-10-07 09:46:04.409474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 starting I/O failed: -6 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.195 starting I/O failed: -6 00:25:05.195 [2024-10-07 09:46:04.409647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e1c50 is same with the state(6) to be set 00:25:05.195 Write completed with error (sct=0, sc=8) 00:25:05.196 [2024-10-07 09:46:04.409664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e1c50 is same with the state(6) to be set 00:25:05.196 [2024-10-07 09:46:04.409670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e1c50 is same with the state(6) to be set 00:25:05.196 [2024-10-07 09:46:04.409675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e1c50 is same with the state(6) to be set 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 [2024-10-07 09:46:04.409680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e1c50 is same with the state(6) to be set 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 [2024-10-07 09:46:04.409694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e1c50 is same with the state(6) to be set 00:25:05.196 [2024-10-07 09:46:04.409701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e1c50 is same with the state(6) to be set 00:25:05.196 starting I/O failed: -6 00:25:05.196 [2024-10-07 09:46:04.409706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e1c50 is same with the state(6) to be set 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 [2024-10-07 09:46:04.409710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e1c50 is same with the state(6) to be set 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 
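The dump above and below is the expected signature of tc4 rather than a test failure: once the target is killed under load, every queued write completes with NVMe status sct=0x0, sc=0x8 (generic status set, Command Aborted due to SQ Deletion), and each queue pair then reports CQ transport error -6, the negated ENXIO ("No such device or address") raised when its TCP connection disappears. A minimal sketch of the assumed continuation (the reaping is not visible in this excerpt):

# Assumed continuation: reap the backgrounded perf and log the outcome; the
# trap installed earlier already guarantees cleanup on failure paths.
wait "$perfpid" && echo "perf completed" || echo "perf aborted as expected ($?)"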
00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 [2024-10-07 09:46:04.410277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error (sct=0, sc=8) 00:25:05.196 starting I/O failed: -6 00:25:05.196 Write completed with error 
(sct=0, sc=8)
00:25:05.196 starting I/O failed: -6
00:25:05.196 Write completed with error (sct=0, sc=8)
00:25:05.196 starting I/O failed: -6
(... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeats for every write still outstanding on the qpair ...)
00:25:05.196 [2024-10-07 09:46:04.411180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
(... pattern repeats ...)
00:25:05.197 [2024-10-07 09:46:04.412608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:05.197 NVMe io qpair process completion error
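For context on the records above: the -6 in "CQ transport error -6 (No such device or address)" is -ENXIO, meaning the TCP connection to the target has gone away, so the completion poll fails at the transport level rather than for any single command. A minimal sketch of a host-side poll loop that surfaces this condition, assuming only the public SPDK NVMe API (the function name and message wording are illustrative, not this test's actual source):

```c
#include <stdio.h>

#include "spdk/nvme.h"

/*
 * Minimal sketch of a host-side completion poller. When the transport to
 * the target drops, spdk_nvme_qpair_process_completions() stops returning
 * a completion count and instead returns a negative errno; -6 (-ENXIO,
 * "No such device or address") is what the log above shows.
 */
static int
poll_qpair(struct spdk_nvme_qpair *qpair)
{
	/* max_completions == 0: drain everything currently in the CQ. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc < 0) {
		/* Transport-level failure: the qpair is broken, not one command. */
		fprintf(stderr, "CQ transport error %d on qpair\n", rc);
		return rc; /* caller should disconnect and reconnect/reset */
	}

	return rc; /* number of completions processed this pass */
}
```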
00:25:05.197 Write completed with error (sct=0, sc=8)
00:25:05.197 starting I/O failed: -6
(... pattern repeats ...)
00:25:05.197 [2024-10-07 09:46:04.413482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
(... pattern repeats ...)
00:25:05.197 [2024-10-07 09:46:04.414314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
(... pattern repeats ...)
00:25:05.198 [2024-10-07 09:46:04.415258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
(... pattern repeats ...)
00:25:05.198 [2024-10-07 09:46:04.416720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:05.198 NVMe io qpair process completion error
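Decoding the per-command status in these records: sct=0 is the generic status code type and, in the NVMe base spec, sc=8 under that type is "Command Aborted due to SQ Deletion", which matches the qpair teardown above. A sketch of a completion callback that decodes the pair, assuming SPDK's public completion helpers (the callback itself is illustrative):

```c
#include <stdio.h>

#include "spdk/nvme.h"

/*
 * Sketch of a write-completion callback decoding the (sct, sc) pair seen
 * above. sct=0 is SPDK_NVME_SCT_GENERIC and sc=8 under that type is
 * SPDK_NVME_SC_ABORTED_SQ_DELETION ("Command Aborted due to SQ Deletion"):
 * the writes were not media failures, they were aborted because their
 * submission queues were deleted when the qpairs dropped.
 */
static void
write_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;

	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("Write completed with error (sct=%d, sc=%d): %s\n",
		       cpl->status.sct, cpl->status.sc,
		       spdk_nvme_cpl_get_status_string(&cpl->status));
	}
}
```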
00:25:05.199 Write completed with error (sct=0, sc=8)
00:25:05.199 starting I/O failed: -6
(... pattern repeats ...)
00:25:05.199 [2024-10-07 09:46:04.417866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
(... pattern repeats ...)
00:25:05.199 [2024-10-07 09:46:04.418824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
(... pattern repeats ...)
00:25:05.199 [2024-10-07 09:46:04.419730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
(... pattern repeats ...)
00:25:05.200 [2024-10-07 09:46:04.422109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:05.200 NVMe io qpair process completion error
(... pattern repeats ...)
00:25:05.200 [2024-10-07 09:46:04.423402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
(... pattern repeats ...)
00:25:05.200 [2024-10-07 09:46:04.424274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
(... pattern repeats ...)
00:25:05.201 [2024-10-07 09:46:04.425200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
(... pattern repeats ...)
00:25:05.201 [2024-10-07 09:46:04.426868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:05.201 NVMe io qpair process completion error
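The interleaved "starting I/O failed: -6" records are the submission side of the same failure: once a qpair has failed, new submissions are rejected synchronously with -ENXIO instead of being queued. A sketch of what such a submit path could look like, assuming the public spdk_nvme_ns_cmd_write() API; the function names and arguments are placeholders, not this test's actual code:

```c
#include <stdio.h>

#include "spdk/nvme.h"

static void
on_write_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	(void)cpl; /* completion handling elided in this sketch */
}

/*
 * Sketch of the submission side. Once a qpair has failed, new writes are
 * rejected synchronously: spdk_nvme_ns_cmd_write() returns a negative
 * errno (here -ENXIO, i.e. -6) instead of queueing the command, which is
 * consistent with the "starting I/O failed: -6" records interleaved with
 * the aborted completions above. ns/qpair/buf/lba are placeholders.
 */
static int
submit_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	     void *buf, uint64_t lba, uint32_t lba_count)
{
	int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
					on_write_done, NULL, 0);

	if (rc != 0) {
		fprintf(stderr, "starting I/O failed: %d\n", rc);
	}

	return rc;
}
```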
(sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 [2024-10-07 09:46:04.427933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.202 starting I/O failed: -6 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 [2024-10-07 09:46:04.428888] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O failed: -6 00:25:05.202 Write completed with error (sct=0, sc=8) 00:25:05.202 starting I/O 
00:25:05.202 [... long run of "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:25:05.202 [2024-10-07 09:46:04.429828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:05.202 [... repeated failed-write entries elided ...]
00:25:05.203 [2024-10-07 09:46:04.432463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:05.203 NVMe io qpair process completion error
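
For readers decoding the entries above: sct/sc are the NVMe status code type and status code. sct=0, sc=8 decodes as generic command status 0x08 (SPDK_NVME_SC_ABORTED_SQ_DELETION: the queued write was aborted when its submission queue went away), and -6 is -ENXIO, the errno SPDK reports once the transport connection to the controller is gone. Below is a minimal sketch of a completion callback that decodes the same fields; the callback name and its registration are hypothetical, while the SPDK types and constants are the library's own.

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical write-completion callback; decodes the (sct, sc) pair
     * that the test prints as "Write completed with error (sct=0, sc=8)". */
    static void
    write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        (void)ctx;
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* sct=0 / sc=8 corresponds to SPDK_NVME_SCT_GENERIC with
             * SPDK_NVME_SC_ABORTED_SQ_DELETION in spdk/nvme_spec.h. */
            fprintf(stderr, "Write completed with error (sct=%d, sc=%d)\n",
                    cpl->status.sct, cpl->status.sc);
        }
    }
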
00:25:05.203 [... repeated failed-write entries elided ...]
00:25:05.203 [2024-10-07 09:46:04.433624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.203 [... repeated failed-write entries elided ...]
00:25:05.203 [2024-10-07 09:46:04.434422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:05.204 [... repeated failed-write entries elided ...]
00:25:05.204 [2024-10-07 09:46:04.435346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:05.204 [... repeated failed-write entries elided ...]
00:25:05.204 [2024-10-07 09:46:04.436958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:05.204 NVMe io qpair process completion error
00:25:05.204 [... repeated failed-write entries elided ...]
00:25:05.205 [2024-10-07 09:46:04.438003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:05.205 [... repeated failed-write entries elided ...]
00:25:05.205 [2024-10-07 09:46:04.438853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:05.205 [... repeated failed-write entries elided ...]
00:25:05.205 [2024-10-07 09:46:04.439790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:05.206 [... repeated failed-write entries elided ...]
00:25:05.206 [2024-10-07 09:46:04.441233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.206 NVMe io qpair process completion error
00:25:05.206 [... long run of "Write completed with error (sct=0, sc=8)" completion entries (no new submissions) elided ...]
00:25:05.206 [2024-10-07 09:46:04.444041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:05.206 NVMe io qpair process completion error
00:25:05.206 [... repeated failed-write entries elided ...]
00:25:05.207 [2024-10-07 09:46:04.445181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:05.207 [... repeated failed-write entries elided ...]
00:25:05.207 [2024-10-07 09:46:04.446004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:05.207 [... repeated failed-write entries elided ...]
00:25:05.207 [2024-10-07 09:46:04.446932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.208 [... repeated failed-write entries elided ...]
00:25:05.208 [2024-10-07 09:46:04.448745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:05.208 NVMe io qpair process completion error
00:25:05.208 [... repeated failed-write entries elided ...]
00:25:05.208 [2024-10-07 09:46:04.450285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:05.209 [... repeated failed-write entries elided ...]
00:25:05.209 [2024-10-07 09:46:04.451309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:05.209 [... repeated failed-write entries elided ...]
starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 Write completed with error (sct=0, sc=8) 00:25:05.209 starting I/O failed: -6 00:25:05.209 [2024-10-07 09:46:04.453145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.209 NVMe io qpair process completion error 00:25:05.209 Initializing NVMe Controllers 00:25:05.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:25:05.209 Controller IO queue size 128, less than required. 
00:25:05.209 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:05.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:05.209 Controller IO queue size 128, less than required.
00:25:05.209 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:05.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:25:05.209 Controller IO queue size 128, less than required.
00:25:05.209 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:05.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:25:05.209 Controller IO queue size 128, less than required.
00:25:05.209 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:05.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:25:05.209 Controller IO queue size 128, less than required.
00:25:05.209 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:05.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:25:05.209 Controller IO queue size 128, less than required.
00:25:05.209 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:05.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:25:05.210 Controller IO queue size 128, less than required.
00:25:05.210 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:05.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:25:05.210 Controller IO queue size 128, less than required.
00:25:05.210 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:05.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:25:05.210 Controller IO queue size 128, less than required.
00:25:05.210 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:05.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:25:05.210 Controller IO queue size 128, less than required.
00:25:05.210 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
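Each pair of lines above reports one controller whose IO queue size (128) is smaller than the queue depth the initiator asked for, so the surplus requests wait inside the driver. A minimal sketch of how the workload could be redriven without driver-side queueing, hedged as an illustration rather than the command this job actually used:

# A hedged sketch, not the exact invocation from this run: redrive the same TCP
# target with a queue depth at or below the controller's reported IO queue size
# (128), so no requests have to wait queued inside the NVMe driver.
# Assumptions: spdk_nvme_perf's standard -q/-o/-w/-t/-r flags; the binary path
# and target address appear elsewhere in this log; IO size and workload are
# illustrative.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

"$PERF" -q 128 -o 4096 -w randwrite -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'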
00:25:05.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:25:05.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:05.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:25:05.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:25:05.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:25:05.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:25:05.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:25:05.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:25:05.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:25:05.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:25:05.210 Initialization complete. Launching workers.
00:25:05.210 ========================================================
00:25:05.210                                                                             Latency(us)
00:25:05.210 Device Information                                                      :     IOPS   MiB/s  Average     min       max
00:25:05.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2)  NSID 1 from core 0:  1934.99   83.14 66165.34  842.56 116950.07
00:25:05.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1)  NSID 1 from core 0:  1898.78   81.59 67441.54  831.41 151108.26
00:25:05.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5)  NSID 1 from core 0:  1902.59   81.75 67329.24  682.56 122501.60
00:25:05.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3)  NSID 1 from core 0:  1920.17   82.51 66743.29  605.51 116593.88
00:25:05.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9)  NSID 1 from core 0:  1923.34   82.64 66655.88  677.69 119592.51
00:25:05.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7)  NSID 1 from core 0:  1888.83   81.16 67907.74  838.78 124110.73
00:25:05.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4)  NSID 1 from core 0:  1928.42   82.86 66531.89  786.13 125439.85
00:25:05.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1896.66   81.50 67502.20  482.48 127644.01
00:25:05.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6)  NSID 1 from core 0:  1924.19   82.68 66692.21  823.57 120515.90
00:25:05.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8)  NSID 1 from core 0:  1883.53   80.93 68149.93  889.95 117776.45
00:25:05.210 ========================================================
00:25:05.210 Total                                                                   : 19101.49  820.77 67106.43  482.48 151108.26
00:25:05.210
00:25:05.210 [2024-10-07 09:46:04.457423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308280 is same with the state(6) to be set
00:25:05.210 [2024-10-07 09:46:04.457466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130a430 is same with the state(6) to be set
00:25:05.210 [2024-10-07 09:46:04.457499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308fd0 is same with the state(6) to be set
00:25:05.210 [2024-10-07 09:46:04.457527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13085b0 is same with the state(6) to be set
00:25:05.210 [2024-10-07 09:46:04.457557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309b40 is same with the state(6) to be set
00:25:05.210 [2024-10-07 09:46:04.457586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x13094e0 is same with the state(6) to be set 00:25:05.210 [2024-10-07 09:46:04.457614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13088e0 is same with the state(6) to be set 00:25:05.210 [2024-10-07 09:46:04.457658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130a760 is same with the state(6) to be set 00:25:05.210 [2024-10-07 09:46:04.457687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13091b0 is same with the state(6) to be set 00:25:05.210 [2024-10-07 09:46:04.457715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309810 is same with the state(6) to be set 00:25:05.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:25:05.210 09:46:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3435457 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # local es=0 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # valid_exec_arg wait 3435457 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@641 -- # local arg=wait 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@645 -- # type -t wait 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@656 -- # wait 3435457 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@656 -- # es=1 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:06.153 09:46:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:06.153 rmmod nvme_tcp 00:25:06.153 rmmod nvme_fabrics 00:25:06.153 rmmod nvme_keyring 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 3435082 ']' 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 3435082 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@953 -- # '[' -z 3435082 ']' 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # kill -0 3435082 00:25:06.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 957: kill: (3435082) - No such process 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@980 -- # echo 'Process with pid 3435082 is not found' 00:25:06.153 Process with pid 3435082 is not found 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.153 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.695 09:46:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:08.695
00:25:08.695 real 0m10.303s
00:25:08.695 user 0m27.682s
00:25:08.695 sys 0m4.028s
09:46:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # xtrace_disable
00:25:08.695 09:46:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:08.695 ************************************
00:25:08.695 END TEST nvmf_shutdown_tc4
00:25:08.695 ************************************
00:25:08.695 09:46:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:25:08.695
00:25:08.695 real 0m43.856s
00:25:08.695 user 1m46.024s
00:25:08.695 sys 0m13.929s
09:46:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # xtrace_disable
00:25:08.695 09:46:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:25:08.695 ************************************
00:25:08.695 END TEST nvmf_shutdown
00:25:08.695 ************************************
00:25:08.695 09:46:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:25:08.695
00:25:08.695 real 12m55.143s
00:25:08.695 user 27m8.637s
00:25:08.695 sys 3m50.829s
09:46:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # xtrace_disable
00:25:08.695 09:46:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:25:08.695 ************************************
00:25:08.695 END TEST nvmf_target_extra
00:25:08.695 ************************************
00:25:08.695 09:46:07 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:25:08.695 09:46:07 nvmf_tcp -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']'
00:25:08.695 09:46:07 nvmf_tcp -- common/autotest_common.sh@1110 -- # xtrace_disable
00:25:08.695 09:46:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:08.695 ************************************
00:25:08.695 START TEST nvmf_host
00:25:08.695 ************************************
00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:25:08.695 * Looking for test storage...
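The START/END banners and real/user/sys blocks above are printed by the harness's run_test wrapper, and the earlier `NOT wait 3435457` check passed precisely because the awaited perf process exited non-zero. A minimal sketch of both idioms, assuming only the behavior visible in this trace and not the verbatim test/common/autotest_common.sh code:

# Hedged sketch of the two harness idioms seen in this log; the real
# implementations live in test/common/autotest_common.sh and differ in detail.

run_test() {                       # print banners around a timed test body
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?                    # capture the test's exit status immediately
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

NOT() {                            # succeed only if the wrapped command fails
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}

# Usage mirroring the trace: the shutdown test expects its perf job to die, so
# `NOT wait 3435457` passes when wait returns the job's non-zero exit status.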
00:25:08.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1626 -- # lcov --version 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:25:08.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.695 --rc genhtml_branch_coverage=1 00:25:08.695 --rc genhtml_function_coverage=1 00:25:08.695 --rc genhtml_legend=1 00:25:08.695 --rc geninfo_all_blocks=1 00:25:08.695 --rc geninfo_unexecuted_blocks=1 00:25:08.695 00:25:08.695 ' 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:25:08.695 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.695 --rc genhtml_branch_coverage=1 00:25:08.695 --rc genhtml_function_coverage=1 00:25:08.695 --rc genhtml_legend=1 00:25:08.695 --rc geninfo_all_blocks=1 00:25:08.695 --rc geninfo_unexecuted_blocks=1 00:25:08.695 00:25:08.695 ' 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:25:08.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.695 --rc genhtml_branch_coverage=1 00:25:08.695 --rc genhtml_function_coverage=1 00:25:08.695 --rc genhtml_legend=1 00:25:08.695 --rc geninfo_all_blocks=1 00:25:08.695 --rc geninfo_unexecuted_blocks=1 00:25:08.695 00:25:08.695 ' 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:25:08.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.695 --rc genhtml_branch_coverage=1 00:25:08.695 --rc genhtml_function_coverage=1 00:25:08.695 --rc genhtml_legend=1 00:25:08.695 --rc geninfo_all_blocks=1 00:25:08.695 --rc geninfo_unexecuted_blocks=1 00:25:08.695 00:25:08.695 ' 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.695 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:08.696 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:08.696 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:08.696 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:08.696 09:46:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:08.696 09:46:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:08.696 09:46:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:25:08.696 09:46:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:25:08.696 09:46:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:08.696 09:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:25:08.696 09:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1110 -- # xtrace_disable 00:25:08.696 09:46:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.696 ************************************ 00:25:08.696 START TEST nvmf_multicontroller 00:25:08.696 ************************************ 00:25:08.696 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:08.960 * Looking for test storage... 00:25:08.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:08.960 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:25:08.960 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1626 -- # lcov --version 00:25:08.960 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:25:08.960 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:25:08.960 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:08.960 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:08.960 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:08.960 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:25:08.960 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:25:08.960 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:25:08.960 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:25:08.960 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:25:08.960 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:25:08.960 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:25:08.960 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:08.960 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:25:08.960 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:25:08.961 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:08.961 
09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:08.961 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:25:08.961 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:25:08.961 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:08.961 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:25:08.961 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:25:08.961 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:25:08.961 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:25:08.961 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:08.961 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:25:08.961 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:25:08.961 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:08.961 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:08.961 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:25:08.961 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:08.961 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:25:08.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.961 --rc genhtml_branch_coverage=1 00:25:08.961 --rc genhtml_function_coverage=1 00:25:08.961 --rc genhtml_legend=1 00:25:08.961 --rc geninfo_all_blocks=1 00:25:08.961 --rc geninfo_unexecuted_blocks=1 00:25:08.961 00:25:08.961 ' 00:25:08.961 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:25:08.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.961 --rc genhtml_branch_coverage=1 00:25:08.961 --rc genhtml_function_coverage=1 00:25:08.961 --rc genhtml_legend=1 00:25:08.962 --rc geninfo_all_blocks=1 00:25:08.962 --rc geninfo_unexecuted_blocks=1 00:25:08.962 00:25:08.962 ' 00:25:08.962 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:25:08.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.962 --rc genhtml_branch_coverage=1 00:25:08.962 --rc genhtml_function_coverage=1 00:25:08.962 --rc genhtml_legend=1 00:25:08.962 --rc geninfo_all_blocks=1 00:25:08.962 --rc geninfo_unexecuted_blocks=1 00:25:08.962 00:25:08.962 ' 00:25:08.962 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:25:08.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.962 --rc genhtml_branch_coverage=1 00:25:08.962 --rc genhtml_function_coverage=1 00:25:08.962 --rc genhtml_legend=1 00:25:08.962 --rc geninfo_all_blocks=1 00:25:08.962 --rc geninfo_unexecuted_blocks=1 00:25:08.962 00:25:08.962 ' 00:25:08.962 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.962 
09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:25:08.962 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.962 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.962 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.962 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.962 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.962 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.962 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.962 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.962 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.962 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.962 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:08.962 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:08.962 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.963 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.963 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.963 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.963 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.963 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:25:08.963 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.963 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.963 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.964 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:08.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:08.965 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:08.966 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:08.966 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.966 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:08.966 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:08.966 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:08.966 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.966 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.966 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.966 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:08.966 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:08.966 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:25:08.966 09:46:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.104 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:17.104 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:25:17.104 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:17.104 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:17.104 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:17.104 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:17.104 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:17.104 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:25:17.104 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:17.104 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:25:17.104 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:25:17.104 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:25:17.104 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:25:17.104 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:25:17.104 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.105 09:46:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:17.105 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:17.105 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:17.105 Found net devices under 0000:31:00.0: cvl_0_0 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:17.105 Found net devices under 0000:31:00.1: cvl_0_1 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.105 09:46:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:17.105 09:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:17.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:17.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:25:17.105 00:25:17.105 --- 10.0.0.2 ping statistics --- 00:25:17.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.105 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:17.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:25:17.105 00:25:17.105 --- 10.0.0.1 ping statistics --- 00:25:17.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.105 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=3441167 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 3441167 00:25:17.105 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:17.106 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # '[' -z 3441167 ']' 00:25:17.106 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.106 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local max_retries=100 00:25:17.106 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.106 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@843 -- # xtrace_disable 00:25:17.106 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.106 [2024-10-07 09:46:16.152526] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
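The nvmf_tcp_init trace above stitches the two E810 port netdevs into a two-endpoint NVMe/TCP topology: cvl_0_0 is moved into a private network namespace to serve as the target side, cvl_0_1 stays in the root namespace as the initiator, and the two ping checks prove the path in both directions before the target starts. A minimal standalone sketch of that setup, assuming the same interface names, namespace name, and 10.0.0.0/24 addressing used in this run:

# Flush stale addresses on both ports first.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Target-side port gets its own namespace; the initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends of the test subnet.
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target

# Bring the links up, including loopback inside the namespace.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open TCP/4420 on the initiator-side interface; the SPDK_NVMF comment tag is
# what the suite later greps out of iptables-save when restoring the firewall.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Verify reachability both ways before launching nvmf_tgt inside the namespace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1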
00:25:17.106 [2024-10-07 09:46:16.152594] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.106 [2024-10-07 09:46:16.245028] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:17.106 [2024-10-07 09:46:16.313892] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.106 [2024-10-07 09:46:16.313931] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.106 [2024-10-07 09:46:16.313939] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.106 [2024-10-07 09:46:16.313946] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.106 [2024-10-07 09:46:16.313951] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:17.106 [2024-10-07 09:46:16.315016] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:17.106 [2024-10-07 09:46:16.315166] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.106 [2024-10-07 09:46:16.315167] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:25:17.367 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:25:17.367 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@867 -- # return 0 00:25:17.367 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:17.367 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@733 -- # xtrace_disable 00:25:17.367 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.367 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.367 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:17.367 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:17.367 09:46:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.367 [2024-10-07 09:46:17.004028] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.367 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:17.367 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:17.367 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:17.367 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.628 Malloc0 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.628 [2024-10-07 09:46:17.069251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.628 [2024-10-07 09:46:17.081178] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.628 Malloc1 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3441301 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3441301 /var/tmp/bdevperf.sock 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # '[' -z 3441301 ']' 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local max_retries=100 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:17.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
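Condensed from the rpc_cmd traces above (rpc_cmd is the suite's thin wrapper around SPDK's scripts/rpc.py, talking to the target's default /var/tmp/spdk.sock), the whole target provisioning is a short JSON-RPC sequence: one TCP transport, then two malloc-backed subsystems that each listen on ports 4420 and 4421 of the same address. The dual listeners are what the multipath probes below depend on. A sketch, with the $RPC shorthand introduced here purely for readability:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport; '-o -u 8192' copied verbatim from the trace.
$RPC nvmf_create_transport -t tcp -o -u 8192

# Subsystem 1: a 64 MiB malloc bdev with 512-byte blocks, exported on two listeners.
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# Subsystem 2: identical shape on a second malloc bdev.
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421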
00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@843 -- # xtrace_disable 00:25:17.628 09:46:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@867 -- # return 0 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:18.602 NVMe0n1 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:18.602 1 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # local es=0 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@656 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:18.602 request: 00:25:18.602 { 00:25:18.602 "name": "NVMe0", 00:25:18.602 "trtype": "tcp", 00:25:18.602 "traddr": "10.0.0.2", 00:25:18.602 "adrfam": "ipv4", 00:25:18.602 "trsvcid": "4420", 00:25:18.602 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:25:18.602 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:18.602 "hostaddr": "10.0.0.1", 00:25:18.602 "prchk_reftag": false, 00:25:18.602 "prchk_guard": false, 00:25:18.602 "hdgst": false, 00:25:18.602 "ddgst": false, 00:25:18.602 "allow_unrecognized_csi": false, 00:25:18.602 "method": "bdev_nvme_attach_controller", 00:25:18.602 "req_id": 1 00:25:18.602 } 00:25:18.602 Got JSON-RPC error response 00:25:18.602 response: 00:25:18.602 { 00:25:18.602 "code": -114, 00:25:18.602 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:18.602 } 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@656 -- # es=1 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # local es=0 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@656 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:18.602 request: 00:25:18.602 { 00:25:18.602 "name": "NVMe0", 00:25:18.602 "trtype": "tcp", 00:25:18.602 "traddr": "10.0.0.2", 00:25:18.602 "adrfam": "ipv4", 00:25:18.602 "trsvcid": "4420", 00:25:18.602 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:18.602 "hostaddr": "10.0.0.1", 00:25:18.602 "prchk_reftag": false, 00:25:18.602 "prchk_guard": false, 00:25:18.602 "hdgst": false, 00:25:18.602 "ddgst": false, 00:25:18.602 "allow_unrecognized_csi": false, 00:25:18.602 "method": "bdev_nvme_attach_controller", 00:25:18.602 "req_id": 1 00:25:18.602 } 00:25:18.602 Got JSON-RPC error response 00:25:18.602 response: 00:25:18.602 { 00:25:18.602 "code": -114, 00:25:18.602 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:18.602 } 00:25:18.602 09:46:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@656 -- # es=1 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # local es=0 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@656 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:18.602 request: 00:25:18.602 { 00:25:18.602 "name": "NVMe0", 00:25:18.602 "trtype": "tcp", 00:25:18.602 "traddr": "10.0.0.2", 00:25:18.602 "adrfam": "ipv4", 00:25:18.602 "trsvcid": "4420", 00:25:18.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:18.602 "hostaddr": "10.0.0.1", 00:25:18.602 "prchk_reftag": false, 00:25:18.602 "prchk_guard": false, 00:25:18.602 "hdgst": false, 00:25:18.602 "ddgst": false, 00:25:18.602 "multipath": "disable", 00:25:18.602 "allow_unrecognized_csi": false, 00:25:18.602 "method": "bdev_nvme_attach_controller", 00:25:18.602 "req_id": 1 00:25:18.602 } 00:25:18.602 Got JSON-RPC error response 00:25:18.602 response: 00:25:18.602 { 00:25:18.602 "code": -114, 00:25:18.602 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:25:18.602 } 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@656 -- # es=1 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:25:18.602 09:46:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # local es=0 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@656 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:18.602 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:18.603 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:18.603 request: 00:25:18.603 { 00:25:18.603 "name": "NVMe0", 00:25:18.603 "trtype": "tcp", 00:25:18.603 "traddr": "10.0.0.2", 00:25:18.603 "adrfam": "ipv4", 00:25:18.603 "trsvcid": "4420", 00:25:18.603 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:18.603 "hostaddr": "10.0.0.1", 00:25:18.603 "prchk_reftag": false, 00:25:18.603 "prchk_guard": false, 00:25:18.603 "hdgst": false, 00:25:18.603 "ddgst": false, 00:25:18.603 "multipath": "failover", 00:25:18.603 "allow_unrecognized_csi": false, 00:25:18.603 "method": "bdev_nvme_attach_controller", 00:25:18.603 "req_id": 1 00:25:18.603 } 00:25:18.603 Got JSON-RPC error response 00:25:18.603 response: 00:25:18.603 { 00:25:18.603 "code": -114, 00:25:18.603 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:18.603 } 00:25:18.603 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:25:18.603 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@656 -- # es=1 00:25:18.603 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:25:18.603 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:25:18.603 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:25:18.603 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:18.603 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:18.603 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:18.868 00:25:18.868 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
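Taken together, the attach attempts traced above spell out bdev_nvme_attach_controller's host-side multipath rules: once a controller named NVMe0 exists, repeating the name against the same listener fails with JSON-RPC error -114 whether an extra hostnqn is supplied, a different subsystem (cnode2) is targeted, or -x disable / -x failover is passed with an identical path; only a genuinely new path, the second listener on port 4421, is accepted. A sketch of the same probe sequence against bdevperf's RPC socket ($RPC and $P are illustrative shorthands; every flag value is as traced):

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
P="-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1"

# Initial attach: creates controller NVMe0 (one path, port 4420).
$RPC bdev_nvme_attach_controller -b NVMe0 $P

# Rejected (-114): same name and network path; the extra hostnqn changes nothing.
$RPC bdev_nvme_attach_controller -b NVMe0 $P -q nqn.2021-09-7.io.spdk:00001

# Rejected (-114): same controller name pointed at a different subsystem (cnode2).
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1

# Rejected (-114): multipath explicitly disabled, so no second path may be added.
$RPC bdev_nvme_attach_controller -b NVMe0 $P -x disable

# Rejected (-114): failover mode, but the path itself is unchanged.
$RPC bdev_nvme_attach_controller -b NVMe0 $P -x failover

# Accepted: same controller name, new path via the 4421 listener (failover path).
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1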
00:25:18.868 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:18.868 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:18.868 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:18.868 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:18.868 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:18.868 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:18.868 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:18.868 00:25:18.868 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:18.868 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:18.868 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:18.868 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:18.868 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:18.868 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:18.868 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:18.868 09:46:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:20.253 { 00:25:20.253 "results": [ 00:25:20.253 { 00:25:20.253 "job": "NVMe0n1", 00:25:20.253 "core_mask": "0x1", 00:25:20.253 "workload": "write", 00:25:20.253 "status": "finished", 00:25:20.253 "queue_depth": 128, 00:25:20.253 "io_size": 4096, 00:25:20.253 "runtime": 1.00791, 00:25:20.253 "iops": 26836.72153267653, 00:25:20.253 "mibps": 104.83094348701769, 00:25:20.253 "io_failed": 0, 00:25:20.253 "io_timeout": 0, 00:25:20.253 "avg_latency_us": 4759.940465574821, 00:25:20.253 "min_latency_us": 2129.92, 00:25:20.253 "max_latency_us": 16820.906666666666 00:25:20.253 } 00:25:20.253 ], 00:25:20.253 "core_count": 1 00:25:20.253 } 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3441301 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@953 -- # '[' -z 3441301 ']' 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # kill -0 3441301 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # uname 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3441301 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3441301' 00:25:20.253 killing process with pid 3441301 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # kill 3441301 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@977 -- # wait 3441301 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:20.253 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1542 -- # read -r file 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1541 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1541 -- # sort -u 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1543 -- # cat 00:25:20.254 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:20.254 [2024-10-07 09:46:17.203967] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:25:20.254 [2024-10-07 09:46:17.204025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3441301 ] 00:25:20.254 [2024-10-07 09:46:17.284625] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.254 [2024-10-07 09:46:17.364112] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.254 [2024-10-07 09:46:18.398643] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name 506a709b-982a-4ec1-b9c1-3b131cb4e660 already exists 00:25:20.254 [2024-10-07 09:46:18.398692] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:506a709b-982a-4ec1-b9c1-3b131cb4e660 alias for bdev NVMe1n1 00:25:20.254 [2024-10-07 09:46:18.398702] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:20.254 Running I/O for 1 seconds... 00:25:20.254 26813.00 IOPS, 104.74 MiB/s 00:25:20.254 Latency(us) 00:25:20.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.254 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:20.254 NVMe0n1 : 1.01 26836.72 104.83 0.00 0.00 4759.94 2129.92 16820.91 00:25:20.254 =================================================================================================================== 00:25:20.254 Total : 26836.72 104.83 0.00 0.00 4759.94 2129.92 16820.91 00:25:20.254 Received shutdown signal, test time was about 1.000000 seconds 00:25:20.254 00:25:20.254 Latency(us) 00:25:20.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.254 =================================================================================================================== 00:25:20.254 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:20.254 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1548 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1542 -- # read -r file 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:20.254 rmmod nvme_tcp 00:25:20.254 rmmod nvme_fabrics 00:25:20.254 rmmod nvme_keyring 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 3441167 ']' 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@516 -- # killprocess 3441167 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' -z 3441167 ']' 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # kill -0 3441167 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # uname 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:25:20.254 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3441167 00:25:20.514 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:25:20.514 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:25:20.514 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3441167' 00:25:20.514 killing process with pid 3441167 00:25:20.514 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # kill 3441167 00:25:20.514 09:46:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@977 -- # wait 3441167 00:25:20.514 09:46:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:20.514 09:46:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:20.514 09:46:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:20.514 09:46:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:25:20.514 09:46:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:25:20.514 09:46:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:20.514 09:46:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:25:20.514 09:46:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:20.514 09:46:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:20.514 09:46:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.514 09:46:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.514 09:46:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:23.056 00:25:23.056 real 0m13.876s 00:25:23.056 user 0m16.535s 00:25:23.056 sys 0m6.429s 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # xtrace_disable 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:23.056 ************************************ 00:25:23.056 END TEST nvmf_multicontroller 00:25:23.056 ************************************ 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1110 -- # xtrace_disable 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.056 ************************************ 00:25:23.056 START TEST nvmf_aer 00:25:23.056 ************************************ 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:23.056 * Looking for test storage... 00:25:23.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1626 -- # lcov --version 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:25:23.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.056 --rc genhtml_branch_coverage=1 00:25:23.056 --rc genhtml_function_coverage=1 00:25:23.056 --rc genhtml_legend=1 00:25:23.056 --rc geninfo_all_blocks=1 00:25:23.056 --rc geninfo_unexecuted_blocks=1 00:25:23.056 00:25:23.056 ' 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:25:23.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.056 --rc genhtml_branch_coverage=1 00:25:23.056 --rc genhtml_function_coverage=1 00:25:23.056 --rc genhtml_legend=1 00:25:23.056 --rc geninfo_all_blocks=1 00:25:23.056 --rc geninfo_unexecuted_blocks=1 00:25:23.056 00:25:23.056 ' 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:25:23.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.056 --rc genhtml_branch_coverage=1 00:25:23.056 --rc genhtml_function_coverage=1 00:25:23.056 --rc genhtml_legend=1 00:25:23.056 --rc geninfo_all_blocks=1 00:25:23.056 --rc geninfo_unexecuted_blocks=1 00:25:23.056 00:25:23.056 ' 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:25:23.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.056 --rc genhtml_branch_coverage=1 00:25:23.056 --rc genhtml_function_coverage=1 00:25:23.056 --rc genhtml_legend=1 00:25:23.056 --rc geninfo_all_blocks=1 00:25:23.056 --rc geninfo_unexecuted_blocks=1 00:25:23.056 00:25:23.056 ' 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.056 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:23.057 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:25:23.057 09:46:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:31.205 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:31.205 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:31.205 
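
The xtrace entries around this point implement NIC discovery: common.sh declares the supported device IDs up front (intel=0x8086 with E810 IDs 0x1592/0x159b and X722 ID 0x37d2; mellanox=0x15b3 with the mlx ID list), then walks pci_devs and echoes each match. A rough standalone sketch of the same idea follows; it scans sysfs directly instead of using SPDK's pci_bus_cache, so treat it as illustrative rather than the actual common.sh implementation.

    #!/usr/bin/env bash
    # Illustrative stand-in for gather_supported_nvmf_pci_devs: find Intel
    # E810 ports (0x1592 / 0x159b, as in the e810 array above) via sysfs.
    intel=0x8086
    pci_devs=()
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        [[ $vendor == "$intel" && $device =~ ^0x(1592|159b)$ ]] || continue
        pci=${dev##*/}
        pci_devs+=("$pci")
        echo "Found $pci ($vendor - $device)"
        # Same glob the trace shows at common.sh@409 for mapping a port to
        # its net device (cvl_0_0 / cvl_0_1 here):
        for net in "$dev"/net/*; do
            [[ -e $net ]] && echo "  net device under $pci: ${net##*/}"
        done
    done
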
09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:31.205 Found net devices under 0000:31:00.0: cvl_0_0 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.205 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:31.206 Found net devices under 0000:31:00.1: cvl_0_1 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.206 09:46:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:31.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:31.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:25:31.206 00:25:31.206 --- 10.0.0.2 ping statistics --- 00:25:31.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.206 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:31.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
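
The ping output interleaved here is the connectivity check that closes nvmf_tcp_init. Condensed from the commands traced just above (only the initial ip -4 addr flush of both ports is omitted), the namespace plumbing amounts to the following:

    # One physical E810 port is moved into a fresh network namespace so the
    # target (cvl_0_0, 10.0.0.2) and initiator (cvl_0_1, 10.0.0.1) can talk
    # over real hardware while running on a single host.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator port stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # The ACCEPT rule carries an SPDK_NVMF comment so teardown can strip it
    # again via iptables-save | grep -v SPDK_NVMF | iptables-restore:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                       # root ns -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # namespace -> initiator side
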
00:25:31.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:25:31.206 00:25:31.206 --- 10.0.0.1 ping statistics --- 00:25:31.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.206 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=3446232 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 3446232 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # '[' -z 3446232 ']' 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local max_retries=100 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@843 -- # xtrace_disable 00:25:31.206 09:46:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.206 [2024-10-07 09:46:30.382232] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:25:31.206 [2024-10-07 09:46:30.382300] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.206 [2024-10-07 09:46:30.473348] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:31.206 [2024-10-07 09:46:30.570318] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.206 [2024-10-07 09:46:30.570380] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:31.206 [2024-10-07 09:46:30.570393] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.206 [2024-10-07 09:46:30.570400] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.206 [2024-10-07 09:46:30.570406] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:31.206 [2024-10-07 09:46:30.572495] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.206 [2024-10-07 09:46:30.572532] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.206 [2024-10-07 09:46:30.572701] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.206 [2024-10-07 09:46:30.572700] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@867 -- # return 0 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@733 -- # xtrace_disable 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.778 [2024-10-07 09:46:31.266804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.778 Malloc0 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer 
-- common/autotest_common.sh@564 -- # xtrace_disable 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.778 [2024-10-07 09:46:31.332489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.778 [ 00:25:31.778 { 00:25:31.778 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:31.778 "subtype": "Discovery", 00:25:31.778 "listen_addresses": [], 00:25:31.778 "allow_any_host": true, 00:25:31.778 "hosts": [] 00:25:31.778 }, 00:25:31.778 { 00:25:31.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.778 "subtype": "NVMe", 00:25:31.778 "listen_addresses": [ 00:25:31.778 { 00:25:31.778 "trtype": "TCP", 00:25:31.778 "adrfam": "IPv4", 00:25:31.778 "traddr": "10.0.0.2", 00:25:31.778 "trsvcid": "4420" 00:25:31.778 } 00:25:31.778 ], 00:25:31.778 "allow_any_host": true, 00:25:31.778 "hosts": [], 00:25:31.778 "serial_number": "SPDK00000000000001", 00:25:31.778 "model_number": "SPDK bdev Controller", 00:25:31.778 "max_namespaces": 2, 00:25:31.778 "min_cntlid": 1, 00:25:31.778 "max_cntlid": 65519, 00:25:31.778 "namespaces": [ 00:25:31.778 { 00:25:31.778 "nsid": 1, 00:25:31.778 "bdev_name": "Malloc0", 00:25:31.778 "name": "Malloc0", 00:25:31.778 "nguid": "451FFF91420E4E9F88F419605B05847E", 00:25:31.778 "uuid": "451fff91-420e-4e9f-88f4-19605b05847e" 00:25:31.778 } 00:25:31.778 ] 00:25:31.778 } 00:25:31.778 ] 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3446415 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- sync/functions.sh@10 -- # local i=0 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- sync/functions.sh@11 -- # [[ ! -e /tmp/aer_touch_file ]] 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- sync/functions.sh@11 -- # (( i++ < 200 )) 00:25:31.778 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- sync/functions.sh@12 -- # sleep 0.1 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- sync/functions.sh@11 -- # [[ ! -e /tmp/aer_touch_file ]] 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- sync/functions.sh@11 -- # (( i++ < 200 )) 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- sync/functions.sh@12 -- # sleep 0.1 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- sync/functions.sh@11 -- # [[ ! -e /tmp/aer_touch_file ]] 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- sync/functions.sh@15 -- # [[ ! 
-e /tmp/aer_touch_file ]] 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- sync/functions.sh@19 -- # return 0 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:32.040 Malloc1 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:32.040 Asynchronous Event Request test 00:25:32.040 Attaching to 10.0.0.2 00:25:32.040 Attached to 10.0.0.2 00:25:32.040 Registering asynchronous event callbacks... 00:25:32.040 Starting namespace attribute notice tests for all controllers... 00:25:32.040 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:32.040 aer_cb - Changed Namespace 00:25:32.040 Cleaning up... 00:25:32.040 [ 00:25:32.040 { 00:25:32.040 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:32.040 "subtype": "Discovery", 00:25:32.040 "listen_addresses": [], 00:25:32.040 "allow_any_host": true, 00:25:32.040 "hosts": [] 00:25:32.040 }, 00:25:32.040 { 00:25:32.040 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:32.040 "subtype": "NVMe", 00:25:32.040 "listen_addresses": [ 00:25:32.040 { 00:25:32.040 "trtype": "TCP", 00:25:32.040 "adrfam": "IPv4", 00:25:32.040 "traddr": "10.0.0.2", 00:25:32.040 "trsvcid": "4420" 00:25:32.040 } 00:25:32.040 ], 00:25:32.040 "allow_any_host": true, 00:25:32.040 "hosts": [], 00:25:32.040 "serial_number": "SPDK00000000000001", 00:25:32.040 "model_number": "SPDK bdev Controller", 00:25:32.040 "max_namespaces": 2, 00:25:32.040 "min_cntlid": 1, 00:25:32.040 "max_cntlid": 65519, 00:25:32.040 "namespaces": [ 00:25:32.040 { 00:25:32.040 "nsid": 1, 00:25:32.040 "bdev_name": "Malloc0", 00:25:32.040 "name": "Malloc0", 00:25:32.040 "nguid": "451FFF91420E4E9F88F419605B05847E", 00:25:32.040 "uuid": "451fff91-420e-4e9f-88f4-19605b05847e" 00:25:32.040 }, 00:25:32.040 { 00:25:32.040 "nsid": 2, 00:25:32.040 "bdev_name": "Malloc1", 00:25:32.040 "name": "Malloc1", 00:25:32.040 "nguid": "55652E280CC24DED8D3823E20FD446F9", 00:25:32.040 "uuid": "55652e28-0cc2-4ded-8d38-23e20fd446f9" 00:25:32.040 } 00:25:32.040 ] 00:25:32.040 } 00:25:32.040 ] 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3446415 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- 
# set +x 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:32.040 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:32.302 rmmod nvme_tcp 00:25:32.302 rmmod nvme_fabrics 00:25:32.302 rmmod nvme_keyring 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 3446232 ']' 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 3446232 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' -z 3446232 ']' 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # kill -0 3446232 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # uname 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3446232 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3446232' 00:25:32.302 killing process with pid 3446232 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # kill 3446232 00:25:32.302 09:46:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@977 -- # wait 3446232 00:25:32.563 09:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:32.563 09:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == 
\t\c\p ]] 00:25:32.563 09:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:32.563 09:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:25:32.563 09:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:25:32.563 09:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:32.563 09:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:25:32.563 09:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:32.563 09:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:32.563 09:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.563 09:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:32.563 09:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.508 09:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:34.508 00:25:34.508 real 0m11.867s 00:25:34.508 user 0m8.135s 00:25:34.508 sys 0m6.416s 00:25:34.508 09:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # xtrace_disable 00:25:34.508 09:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:34.508 ************************************ 00:25:34.508 END TEST nvmf_aer 00:25:34.508 ************************************ 00:25:34.769 09:46:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:34.769 09:46:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:25:34.769 09:46:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1110 -- # xtrace_disable 00:25:34.769 09:46:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.769 ************************************ 00:25:34.769 START TEST nvmf_async_init 00:25:34.769 ************************************ 00:25:34.769 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:34.769 * Looking for test storage... 
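
Before the log moves fully into nvmf_async_init, note the synchronization trick the aer test above relied on: the aer tool is started in the background with -t /tmp/aer_touch_file, it touches that file once its Asynchronous Event Request is armed, the script polls for the file before adding namespace 2 (which fires the Changed Namespace AEN), and wait $aerpid then collects the tool's exit status. A minimal sketch of the poller, modeled on the sync/functions.sh entries traced above; the 200 x 0.1 s bound comes straight from the trace, and error handling is simplified.

    # Minimal re-creation of waitforfile: poll up to 200 times at 0.1 s
    # intervals (~20 s total) for a sentinel file created by the peer.
    waitforfile() {
        local file=$1 i=0
        while [[ ! -e $file ]] && (( i++ < 200 )); do
            sleep 0.1
        done
        [[ -e $file ]]    # exit status 0 once the file has appeared
    }

    # Usage in the style of host/aer.sh ("aer_listener" is a hypothetical
    # stand-in for the real test/nvme/aer/aer binary):
    rm -f /tmp/aer_touch_file
    aer_listener -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    waitforfile /tmp/aer_touch_file || exit 1
    # ...add the second namespace here to trigger the AEN...
    wait "$aerpid"
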
00:25:34.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:34.769 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:25:34.769 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1626 -- # lcov --version 00:25:34.769 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:25:35.030 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:25:35.030 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:35.030 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:35.030 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:35.030 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:25:35.030 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:25:35.030 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:25:35.030 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:25:35.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.031 --rc genhtml_branch_coverage=1 00:25:35.031 --rc genhtml_function_coverage=1 00:25:35.031 --rc genhtml_legend=1 00:25:35.031 --rc geninfo_all_blocks=1 00:25:35.031 --rc geninfo_unexecuted_blocks=1 00:25:35.031 00:25:35.031 ' 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:25:35.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.031 --rc genhtml_branch_coverage=1 00:25:35.031 --rc genhtml_function_coverage=1 00:25:35.031 --rc genhtml_legend=1 00:25:35.031 --rc geninfo_all_blocks=1 00:25:35.031 --rc geninfo_unexecuted_blocks=1 00:25:35.031 00:25:35.031 ' 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:25:35.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.031 --rc genhtml_branch_coverage=1 00:25:35.031 --rc genhtml_function_coverage=1 00:25:35.031 --rc genhtml_legend=1 00:25:35.031 --rc geninfo_all_blocks=1 00:25:35.031 --rc geninfo_unexecuted_blocks=1 00:25:35.031 00:25:35.031 ' 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:25:35.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.031 --rc genhtml_branch_coverage=1 00:25:35.031 --rc genhtml_function_coverage=1 00:25:35.031 --rc genhtml_legend=1 00:25:35.031 --rc geninfo_all_blocks=1 00:25:35.031 --rc geninfo_unexecuted_blocks=1 00:25:35.031 00:25:35.031 ' 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.031 09:46:34 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.031 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- 
# '[' '' -eq 1 ']' 00:25:35.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a92789d27fd842f69764d65e3094f31f 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:25:35.032 09:46:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:43.174 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:43.174 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:25:43.174 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:43.174 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:43.174 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:43.174 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:43.174 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:43.174 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:25:43.174 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:43.175 09:46:41 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:43.175 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:43.175 09:46:41 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:43.175 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:43.175 Found net devices under 0000:31:00.0: cvl_0_0 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:43.175 Found net devices under 0000:31:00.1: cvl_0_1 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:43.175 09:46:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:43.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:43.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:25:43.175 00:25:43.175 --- 10.0.0.2 ping statistics --- 00:25:43.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.175 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:43.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:43.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:25:43.175 00:25:43.175 --- 10.0.0.1 ping statistics --- 00:25:43.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.175 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=3450815 00:25:43.175 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 3450815 00:25:43.176 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:43.176 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # '[' -z 3450815 ']' 00:25:43.176 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.176 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local max_retries=100 00:25:43.176 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.176 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@843 -- # xtrace_disable 00:25:43.176 09:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:43.176 [2024-10-07 09:46:42.308867] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
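[annotation] The wiring above is the crux of the phy TCP harness: both ports of the same physical E810 NIC are used, with the target port pushed into a private network namespace so that initiator and target traffic traverse real hardware. A minimal sketch of that wiring, using the device names and addresses from this run (cvl_0_0/cvl_0_1 and 10.0.0.x are specific to this host):

    # Target port lives in its own namespace; initiator port stays in the root ns.
    ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    # Tagged ACCEPT rule so the cleanup pass can strip it later with a grep.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1                     # target -> initiator

The single round-trip ping in each direction is the gate: only after both succeed does the harness launch nvmf_tgt under `ip netns exec`, as seen above.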
00:25:43.176 [2024-10-07 09:46:42.308934] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:43.176 [2024-10-07 09:46:42.400489] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.176 [2024-10-07 09:46:42.494327] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:43.176 [2024-10-07 09:46:42.494394] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:43.176 [2024-10-07 09:46:42.494409] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:43.176 [2024-10-07 09:46:42.494417] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:43.176 [2024-10-07 09:46:42.494423] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:43.176 [2024-10-07 09:46:42.495241] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@867 -- # return 0 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@733 -- # xtrace_disable 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:43.748 [2024-10-07 09:46:43.189320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:43.748 null0 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a92789d27fd842f69764d65e3094f31f 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:43.748 [2024-10-07 09:46:43.249759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:43.748 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.009 nvme0n1 00:25:44.009 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:44.009 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:44.009 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:44.009 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.009 [ 00:25:44.009 { 00:25:44.009 "name": "nvme0n1", 00:25:44.009 "aliases": [ 00:25:44.009 "a92789d2-7fd8-42f6-9764-d65e3094f31f" 00:25:44.009 ], 00:25:44.009 "product_name": "NVMe disk", 00:25:44.009 "block_size": 512, 00:25:44.009 "num_blocks": 2097152, 00:25:44.009 "uuid": "a92789d2-7fd8-42f6-9764-d65e3094f31f", 00:25:44.009 "numa_id": 0, 00:25:44.009 "assigned_rate_limits": { 00:25:44.009 "rw_ios_per_sec": 0, 00:25:44.009 "rw_mbytes_per_sec": 0, 00:25:44.009 "r_mbytes_per_sec": 0, 00:25:44.009 "w_mbytes_per_sec": 0 00:25:44.009 }, 00:25:44.009 "claimed": false, 00:25:44.009 "zoned": false, 00:25:44.009 "supported_io_types": { 00:25:44.009 "read": true, 00:25:44.009 "write": true, 00:25:44.009 "unmap": false, 00:25:44.009 "flush": true, 00:25:44.009 "reset": true, 00:25:44.009 "nvme_admin": true, 00:25:44.009 "nvme_io": true, 00:25:44.009 "nvme_io_md": false, 00:25:44.009 "write_zeroes": true, 00:25:44.009 "zcopy": false, 00:25:44.009 "get_zone_info": false, 00:25:44.009 "zone_management": false, 00:25:44.009 "zone_append": false, 00:25:44.009 "compare": true, 00:25:44.009 "compare_and_write": true, 00:25:44.009 "abort": true, 00:25:44.009 "seek_hole": false, 00:25:44.009 "seek_data": false, 00:25:44.009 "copy": true, 00:25:44.009 "nvme_iov_md": false 00:25:44.009 }, 00:25:44.009 
"memory_domains": [ 00:25:44.009 { 00:25:44.009 "dma_device_id": "system", 00:25:44.009 "dma_device_type": 1 00:25:44.009 } 00:25:44.009 ], 00:25:44.010 "driver_specific": { 00:25:44.010 "nvme": [ 00:25:44.010 { 00:25:44.010 "trid": { 00:25:44.010 "trtype": "TCP", 00:25:44.010 "adrfam": "IPv4", 00:25:44.010 "traddr": "10.0.0.2", 00:25:44.010 "trsvcid": "4420", 00:25:44.010 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:44.010 }, 00:25:44.010 "ctrlr_data": { 00:25:44.010 "cntlid": 1, 00:25:44.010 "vendor_id": "0x8086", 00:25:44.010 "model_number": "SPDK bdev Controller", 00:25:44.010 "serial_number": "00000000000000000000", 00:25:44.010 "firmware_revision": "25.01", 00:25:44.010 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:44.010 "oacs": { 00:25:44.010 "security": 0, 00:25:44.010 "format": 0, 00:25:44.010 "firmware": 0, 00:25:44.010 "ns_manage": 0 00:25:44.010 }, 00:25:44.010 "multi_ctrlr": true, 00:25:44.010 "ana_reporting": false 00:25:44.010 }, 00:25:44.010 "vs": { 00:25:44.010 "nvme_version": "1.3" 00:25:44.010 }, 00:25:44.010 "ns_data": { 00:25:44.010 "id": 1, 00:25:44.010 "can_share": true 00:25:44.010 } 00:25:44.010 } 00:25:44.010 ], 00:25:44.010 "mp_policy": "active_passive" 00:25:44.010 } 00:25:44.010 } 00:25:44.010 ] 00:25:44.010 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:44.010 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:44.010 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:44.010 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.010 [2024-10-07 09:46:43.527523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:44.010 [2024-10-07 09:46:43.527627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee53e0 (9): Bad file descriptor 00:25:44.010 [2024-10-07 09:46:43.659734] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:44.010 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:44.010 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:44.010 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:44.010 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.270 [ 00:25:44.270 { 00:25:44.270 "name": "nvme0n1", 00:25:44.270 "aliases": [ 00:25:44.270 "a92789d2-7fd8-42f6-9764-d65e3094f31f" 00:25:44.270 ], 00:25:44.270 "product_name": "NVMe disk", 00:25:44.270 "block_size": 512, 00:25:44.270 "num_blocks": 2097152, 00:25:44.270 "uuid": "a92789d2-7fd8-42f6-9764-d65e3094f31f", 00:25:44.270 "numa_id": 0, 00:25:44.270 "assigned_rate_limits": { 00:25:44.270 "rw_ios_per_sec": 0, 00:25:44.270 "rw_mbytes_per_sec": 0, 00:25:44.270 "r_mbytes_per_sec": 0, 00:25:44.270 "w_mbytes_per_sec": 0 00:25:44.270 }, 00:25:44.270 "claimed": false, 00:25:44.270 "zoned": false, 00:25:44.270 "supported_io_types": { 00:25:44.270 "read": true, 00:25:44.270 "write": true, 00:25:44.270 "unmap": false, 00:25:44.270 "flush": true, 00:25:44.270 "reset": true, 00:25:44.270 "nvme_admin": true, 00:25:44.270 "nvme_io": true, 00:25:44.270 "nvme_io_md": false, 00:25:44.270 "write_zeroes": true, 00:25:44.270 "zcopy": false, 00:25:44.270 "get_zone_info": false, 00:25:44.270 "zone_management": false, 00:25:44.270 "zone_append": false, 00:25:44.270 "compare": true, 00:25:44.270 "compare_and_write": true, 00:25:44.270 "abort": true, 00:25:44.270 "seek_hole": false, 00:25:44.270 "seek_data": false, 00:25:44.270 "copy": true, 00:25:44.270 "nvme_iov_md": false 00:25:44.270 }, 00:25:44.270 "memory_domains": [ 00:25:44.270 { 00:25:44.270 "dma_device_id": "system", 00:25:44.270 "dma_device_type": 1 00:25:44.270 } 00:25:44.270 ], 00:25:44.270 "driver_specific": { 00:25:44.270 "nvme": [ 00:25:44.270 { 00:25:44.270 "trid": { 00:25:44.270 "trtype": "TCP", 00:25:44.270 "adrfam": "IPv4", 00:25:44.270 "traddr": "10.0.0.2", 00:25:44.270 "trsvcid": "4420", 00:25:44.270 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:44.270 }, 00:25:44.270 "ctrlr_data": { 00:25:44.270 "cntlid": 2, 00:25:44.270 "vendor_id": "0x8086", 00:25:44.270 "model_number": "SPDK bdev Controller", 00:25:44.270 "serial_number": "00000000000000000000", 00:25:44.270 "firmware_revision": "25.01", 00:25:44.270 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:44.270 "oacs": { 00:25:44.270 "security": 0, 00:25:44.270 "format": 0, 00:25:44.270 "firmware": 0, 00:25:44.270 "ns_manage": 0 00:25:44.270 }, 00:25:44.270 "multi_ctrlr": true, 00:25:44.270 "ana_reporting": false 00:25:44.270 }, 00:25:44.270 "vs": { 00:25:44.271 "nvme_version": "1.3" 00:25:44.271 }, 00:25:44.271 "ns_data": { 00:25:44.271 "id": 1, 00:25:44.271 "can_share": true 00:25:44.271 } 00:25:44.271 } 00:25:44.271 ], 00:25:44.271 "mp_policy": "active_passive" 00:25:44.271 } 00:25:44.271 } 00:25:44.271 ] 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
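[annotation] That detach closes out the plaintext pass. Condensed, the RPC sequence driven above (every command appears verbatim in this log; note how the -g GUID given to nvmf_subsystem_add_ns resurfaces as the bdev's uuid/alias a92789d2-7fd8-42f6-9764-d65e3094f31f):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_null_create null0 1024 512             # 1024 MiB, 512 B blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
        -g a92789d27fd842f69764d65e3094f31f
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0                # exposes bdev nvme0n1
    $rpc bdev_nvme_detach_controller nvme0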
00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.CwrKNnP7WE 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.CwrKNnP7WE 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.CwrKNnP7WE 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.271 [2024-10-07 09:46:43.748272] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:44.271 [2024-10-07 09:46:43.748450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.271 [2024-10-07 09:46:43.772352] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:44.271 nvme0n1 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
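[annotation] The string written to the temp key file is an NVMe TLS pre-shared key in the interchange format, NVMeTLSkey-1:<hash>:<base64 secret plus CRC>:, where the 01 field selects SHA-256. If one were generating such a key rather than hard-coding it, nvme-cli's gen-tls-key is the usual route; treat the exact flag spelling below as an assumption of this note rather than something this log demonstrates:

    # Hypothetical key generation; the test itself echoes a fixed key instead.
    key_file=$(mktemp)
    nvme gen-tls-key --hmac=1 > "$key_file"   # --hmac=1 => SHA-256, the "01" field
    chmod 0600 "$key_file"                    # mirrors the chmod the test performs
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc keyring_file_add_key key0 "$key_file"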
00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.271 [ 00:25:44.271 { 00:25:44.271 "name": "nvme0n1", 00:25:44.271 "aliases": [ 00:25:44.271 "a92789d2-7fd8-42f6-9764-d65e3094f31f" 00:25:44.271 ], 00:25:44.271 "product_name": "NVMe disk", 00:25:44.271 "block_size": 512, 00:25:44.271 "num_blocks": 2097152, 00:25:44.271 "uuid": "a92789d2-7fd8-42f6-9764-d65e3094f31f", 00:25:44.271 "numa_id": 0, 00:25:44.271 "assigned_rate_limits": { 00:25:44.271 "rw_ios_per_sec": 0, 00:25:44.271 "rw_mbytes_per_sec": 0, 00:25:44.271 "r_mbytes_per_sec": 0, 00:25:44.271 "w_mbytes_per_sec": 0 00:25:44.271 }, 00:25:44.271 "claimed": false, 00:25:44.271 "zoned": false, 00:25:44.271 "supported_io_types": { 00:25:44.271 "read": true, 00:25:44.271 "write": true, 00:25:44.271 "unmap": false, 00:25:44.271 "flush": true, 00:25:44.271 "reset": true, 00:25:44.271 "nvme_admin": true, 00:25:44.271 "nvme_io": true, 00:25:44.271 "nvme_io_md": false, 00:25:44.271 "write_zeroes": true, 00:25:44.271 "zcopy": false, 00:25:44.271 "get_zone_info": false, 00:25:44.271 "zone_management": false, 00:25:44.271 "zone_append": false, 00:25:44.271 "compare": true, 00:25:44.271 "compare_and_write": true, 00:25:44.271 "abort": true, 00:25:44.271 "seek_hole": false, 00:25:44.271 "seek_data": false, 00:25:44.271 "copy": true, 00:25:44.271 "nvme_iov_md": false 00:25:44.271 }, 00:25:44.271 "memory_domains": [ 00:25:44.271 { 00:25:44.271 "dma_device_id": "system", 00:25:44.271 "dma_device_type": 1 00:25:44.271 } 00:25:44.271 ], 00:25:44.271 "driver_specific": { 00:25:44.271 "nvme": [ 00:25:44.271 { 00:25:44.271 "trid": { 00:25:44.271 "trtype": "TCP", 00:25:44.271 "adrfam": "IPv4", 00:25:44.271 "traddr": "10.0.0.2", 00:25:44.271 "trsvcid": "4421", 00:25:44.271 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:44.271 }, 00:25:44.271 "ctrlr_data": { 00:25:44.271 "cntlid": 3, 00:25:44.271 "vendor_id": "0x8086", 00:25:44.271 "model_number": "SPDK bdev Controller", 00:25:44.271 "serial_number": "00000000000000000000", 00:25:44.271 "firmware_revision": "25.01", 00:25:44.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:44.271 "oacs": { 00:25:44.271 "security": 0, 00:25:44.271 "format": 0, 00:25:44.271 "firmware": 0, 00:25:44.271 "ns_manage": 0 00:25:44.271 }, 00:25:44.271 "multi_ctrlr": true, 00:25:44.271 "ana_reporting": false 00:25:44.271 }, 00:25:44.271 "vs": { 00:25:44.271 "nvme_version": "1.3" 00:25:44.271 }, 00:25:44.271 "ns_data": { 00:25:44.271 "id": 1, 00:25:44.271 "can_share": true 00:25:44.271 } 00:25:44.271 } 00:25:44.271 ], 00:25:44.271 "mp_policy": "active_passive" 00:25:44.271 } 00:25:44.271 } 00:25:44.271 ] 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.CwrKNnP7WE 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
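[annotation] The secured pass mirrors the plaintext one, with three additions visible above: host authorization is first narrowed (allow_any_host --disable), the 4421 listener is created with --secure-channel, and both the subsystem host entry and the initiator-side attach reference the registered PSK. The resulting controller reports cntlid 3 and a trid of port 4421, i.e. the same namespace reached over TLS. Condensed from the log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -q nqn.2016-06.io.spdk:host1 -n nqn.2016-06.io.spdk:cnode0 --psk key0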
00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:44.271 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:44.271 rmmod nvme_tcp 00:25:44.271 rmmod nvme_fabrics 00:25:44.532 rmmod nvme_keyring 00:25:44.532 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:44.532 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:25:44.532 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:25:44.532 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 3450815 ']' 00:25:44.532 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 3450815 00:25:44.532 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' -z 3450815 ']' 00:25:44.532 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # kill -0 3450815 00:25:44.532 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # uname 00:25:44.532 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:25:44.532 09:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3450815 00:25:44.532 09:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:25:44.532 09:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:25:44.532 09:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3450815' 00:25:44.532 killing process with pid 3450815 00:25:44.532 09:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # kill 3450815 00:25:44.532 09:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@977 -- # wait 3450815 00:25:44.792 09:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:44.792 09:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:44.792 09:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:44.792 09:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:25:44.792 09:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:25:44.792 09:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:44.792 09:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:25:44.792 09:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:44.792 09:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:44.792 09:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
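[annotation] Teardown is the inverse of the prologue: unload the host-side modules, kill the target, then undo the network plumbing. The SPDK_NVMF comment tag added to the iptables rule earlier is what makes the rule strippable with a plain grep here. Roughly equivalent shell for this run, with the namespace and device names as above (_remove_spdk_ns amounts to deleting the namespace, which returns cvl_0_0 to the root namespace):

    # iptr: restore the ruleset minus anything the harness tagged.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1          # matches the flush recorded just below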
00:25:44.792 09:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.792 09:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.704 09:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:46.704 00:25:46.704 real 0m12.088s 00:25:46.704 user 0m4.402s 00:25:46.704 sys 0m6.266s 00:25:46.704 09:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # xtrace_disable 00:25:46.704 09:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:46.704 ************************************ 00:25:46.704 END TEST nvmf_async_init 00:25:46.704 ************************************ 00:25:46.704 09:46:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:46.704 09:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:25:46.704 09:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1110 -- # xtrace_disable 00:25:46.704 09:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.965 ************************************ 00:25:46.965 START TEST dma 00:25:46.965 ************************************ 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:46.965 * Looking for test storage... 00:25:46.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1626 -- # lcov --version 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:46.965 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:25:47.253 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:25:47.253 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.253 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:25:47.253 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.253 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:25:47.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.254 --rc genhtml_branch_coverage=1 00:25:47.254 --rc genhtml_function_coverage=1 00:25:47.254 --rc genhtml_legend=1 00:25:47.254 --rc geninfo_all_blocks=1 00:25:47.254 --rc geninfo_unexecuted_blocks=1 00:25:47.254 00:25:47.254 ' 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:25:47.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.254 --rc genhtml_branch_coverage=1 00:25:47.254 --rc genhtml_function_coverage=1 00:25:47.254 --rc genhtml_legend=1 00:25:47.254 --rc geninfo_all_blocks=1 00:25:47.254 --rc geninfo_unexecuted_blocks=1 00:25:47.254 00:25:47.254 ' 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:25:47.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.254 --rc genhtml_branch_coverage=1 00:25:47.254 --rc genhtml_function_coverage=1 00:25:47.254 --rc genhtml_legend=1 00:25:47.254 --rc geninfo_all_blocks=1 00:25:47.254 --rc geninfo_unexecuted_blocks=1 00:25:47.254 00:25:47.254 ' 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:25:47.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.254 --rc genhtml_branch_coverage=1 00:25:47.254 --rc genhtml_function_coverage=1 00:25:47.254 --rc genhtml_legend=1 00:25:47.254 --rc geninfo_all_blocks=1 00:25:47.254 --rc geninfo_unexecuted_blocks=1 00:25:47.254 00:25:47.254 ' 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.254 
09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:47.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : 
integer expression expected 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:47.254 00:25:47.254 real 0m0.279s 00:25:47.254 user 0m0.153s 00:25:47.254 sys 0m0.142s 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # xtrace_disable 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:47.254 ************************************ 00:25:47.254 END TEST dma 00:25:47.254 ************************************ 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1110 -- # xtrace_disable 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.254 ************************************ 00:25:47.254 START TEST nvmf_identify 00:25:47.254 ************************************ 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:47.254 * Looking for test storage... 00:25:47.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1626 -- # lcov --version 00:25:47.254 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:25:47.556 09:46:46 
nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:25:47.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.556 --rc genhtml_branch_coverage=1 00:25:47.556 --rc genhtml_function_coverage=1 00:25:47.556 --rc genhtml_legend=1 00:25:47.556 --rc geninfo_all_blocks=1 00:25:47.556 --rc geninfo_unexecuted_blocks=1 00:25:47.556 00:25:47.556 ' 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:25:47.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.556 --rc genhtml_branch_coverage=1 00:25:47.556 --rc genhtml_function_coverage=1 00:25:47.556 --rc genhtml_legend=1 00:25:47.556 --rc geninfo_all_blocks=1 00:25:47.556 --rc geninfo_unexecuted_blocks=1 00:25:47.556 00:25:47.556 ' 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:25:47.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.556 --rc genhtml_branch_coverage=1 00:25:47.556 --rc genhtml_function_coverage=1 00:25:47.556 --rc genhtml_legend=1 00:25:47.556 --rc geninfo_all_blocks=1 00:25:47.556 --rc geninfo_unexecuted_blocks=1 00:25:47.556 00:25:47.556 ' 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:25:47.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.556 --rc genhtml_branch_coverage=1 00:25:47.556 --rc genhtml_function_coverage=1 00:25:47.556 --rc genhtml_legend=1 00:25:47.556 --rc geninfo_all_blocks=1 00:25:47.556 --rc geninfo_unexecuted_blocks=1 00:25:47.556 00:25:47.556 ' 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 
-- # uname -s 00:25:47.556 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.556 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:25:47.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:25:47.557 09:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:55.707 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:55.707 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:55.707 Found net devices under 0000:31:00.0: cvl_0_0 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:55.707 Found net devices under 0000:31:00.1: cvl_0_1 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.707 09:46:54 
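[Note] The array juggling traced above is nvmf/common.sh enumerating supported NICs by PCI vendor/device ID and then mapping each PCI function to its kernel net interface through sysfs. A minimal sketch of that lookup, using the first E810 port from this run; names are taken from the trace, and the real helper additionally filters on the bound driver and the interface operstate before accepting a device:

  pci=0000:31:00.0                                  # first 0x8086:0x159b function found above
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # kernel lists the bound ifaces here
  pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path, keep the ifname
  echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0 in this run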
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.707 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:55.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:25:55.708 00:25:55.708 --- 10.0.0.2 ping statistics --- 00:25:55.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.708 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:55.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:25:55.708 00:25:55.708 --- 10.0.0.1 ping statistics --- 00:25:55.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.708 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3455625 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3455625 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # '[' -z 3455625 ']' 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local max_retries=100 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@843 -- # xtrace_disable 00:25:55.708 09:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:55.708 [2024-10-07 09:46:54.892120] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
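[Note] Condensed from the nvmf_tcp_init trace above: the two E810 ports are evidently cabled back-to-back (both pings succeed), and the helper splits them across a network namespace so one host can play both roles. The first port (cvl_0_0) moves into cvl_0_0_ns_spdk as the target side at 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A sketch of that topology, omitting the address flushes and the iptables comment tag the real helper adds:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                  # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator reachability

The target application is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF, traced above), which is why its listener at 10.0.0.2:4420 is only reachable through cvl_0_1.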
00:25:55.708 [2024-10-07 09:46:54.892189] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.708 [2024-10-07 09:46:54.982965] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:55.708 [2024-10-07 09:46:55.079935] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.708 [2024-10-07 09:46:55.079989] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.708 [2024-10-07 09:46:55.079998] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.708 [2024-10-07 09:46:55.080006] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.708 [2024-10-07 09:46:55.080012] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:55.708 [2024-10-07 09:46:55.081960] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.708 [2024-10-07 09:46:55.082121] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.708 [2024-10-07 09:46:55.082276] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.708 [2024-10-07 09:46:55.082277] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:25:56.285 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@867 -- # return 0 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:56.286 [2024-10-07 09:46:55.719245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@733 -- # xtrace_disable 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:56.286 Malloc0 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:56.286 [2024-10-07 09:46:55.829176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:56.286 [ 00:25:56.286 { 00:25:56.286 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:56.286 "subtype": "Discovery", 00:25:56.286 "listen_addresses": [ 00:25:56.286 { 00:25:56.286 "trtype": "TCP", 00:25:56.286 "adrfam": "IPv4", 00:25:56.286 "traddr": "10.0.0.2", 00:25:56.286 "trsvcid": "4420" 00:25:56.286 } 00:25:56.286 ], 00:25:56.286 "allow_any_host": true, 00:25:56.286 "hosts": [] 00:25:56.286 }, 00:25:56.286 { 00:25:56.286 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:56.286 "subtype": "NVMe", 00:25:56.286 "listen_addresses": [ 00:25:56.286 { 00:25:56.286 "trtype": "TCP", 00:25:56.286 "adrfam": "IPv4", 00:25:56.286 "traddr": "10.0.0.2", 00:25:56.286 "trsvcid": "4420" 00:25:56.286 } 00:25:56.286 ], 00:25:56.286 "allow_any_host": true, 00:25:56.286 "hosts": [], 00:25:56.286 "serial_number": "SPDK00000000000001", 00:25:56.286 "model_number": "SPDK bdev Controller", 00:25:56.286 "max_namespaces": 32, 00:25:56.286 "min_cntlid": 1, 00:25:56.286 "max_cntlid": 65519, 00:25:56.286 "namespaces": [ 00:25:56.286 { 00:25:56.286 "nsid": 1, 00:25:56.286 "bdev_name": "Malloc0", 00:25:56.286 "name": "Malloc0", 00:25:56.286 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:56.286 "eui64": "ABCDEF0123456789", 00:25:56.286 "uuid": "91957016-2bf7-4588-b2dc-77803a2049e6" 00:25:56.286 } 00:25:56.286 ] 00:25:56.286 } 00:25:56.286 ] 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:56.286 09:46:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:56.286 [2024-10-07 09:46:55.892256] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:25:56.286 [2024-10-07 09:46:55.892316] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455911 ] 00:25:56.286 [2024-10-07 09:46:55.928860] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:56.286 [2024-10-07 09:46:55.928922] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:56.286 [2024-10-07 09:46:55.928928] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:56.286 [2024-10-07 09:46:55.928945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:56.286 [2024-10-07 09:46:55.928957] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:56.286 [2024-10-07 09:46:55.929773] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:56.286 [2024-10-07 09:46:55.929822] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x16ed620 0 00:25:56.286 [2024-10-07 09:46:55.943647] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:56.286 [2024-10-07 09:46:55.943667] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:56.287 [2024-10-07 09:46:55.943672] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:56.287 [2024-10-07 09:46:55.943676] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:56.287 [2024-10-07 09:46:55.943717] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.287 [2024-10-07 09:46:55.943724] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.287 [2024-10-07 09:46:55.943729] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ed620) 00:25:56.287 [2024-10-07 09:46:55.943745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:56.287 [2024-10-07 09:46:55.943768] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174d480, cid 0, qid 0 00:25:56.554 [2024-10-07 09:46:55.951633] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.554 [2024-10-07 09:46:55.951647] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.554 [2024-10-07 09:46:55.951656] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.554 [2024-10-07 09:46:55.951661] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174d480) on tqpair=0x16ed620 00:25:56.554 [2024-10-07 09:46:55.951676] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:56.554 [2024-10-07 09:46:55.951688] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:56.554 [2024-10-07 09:46:55.951694] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:56.554 [2024-10-07 09:46:55.951711] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.554 [2024-10-07 09:46:55.951716] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.554 [2024-10-07 09:46:55.951719] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ed620) 00:25:56.554 [2024-10-07 09:46:55.951731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.554 [2024-10-07 09:46:55.951750] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174d480, cid 0, qid 0 00:25:56.554 [2024-10-07 09:46:55.951992] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.554 [2024-10-07 09:46:55.951999] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.554 [2024-10-07 09:46:55.952003] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.554 [2024-10-07 09:46:55.952007] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174d480) on tqpair=0x16ed620 00:25:56.554 [2024-10-07 09:46:55.952012] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:56.554 [2024-10-07 09:46:55.952020] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:56.554 [2024-10-07 09:46:55.952028] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.554 [2024-10-07 09:46:55.952033] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.554 [2024-10-07 09:46:55.952037] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ed620) 00:25:56.555 [2024-10-07 09:46:55.952044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.555 [2024-10-07 09:46:55.952056] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174d480, cid 0, qid 0 00:25:56.555 [2024-10-07 09:46:55.952285] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.555 [2024-10-07 09:46:55.952293] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.555 [2024-10-07 09:46:55.952296] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.952300] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174d480) on tqpair=0x16ed620 00:25:56.555 [2024-10-07 09:46:55.952306] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:56.555 [2024-10-07 09:46:55.952314] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:56.555 [2024-10-07 09:46:55.952321] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.952325] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.952330] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ed620) 00:25:56.555 [2024-10-07 09:46:55.952339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.555 [2024-10-07 09:46:55.952349] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174d480, cid 0, qid 0 00:25:56.555 
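[Note] The qid:0 FABRIC CONNECT / FABRIC PROPERTY GET exchange traced here is the identify utility attaching to the discovery controller configured earlier. For reference, the target it is probing was populated a few steps back with a handful of RPCs, condensed from the host/identify.sh xtrace above (rpc_cmd is the suite's wrapper around scripts/rpc.py, talking to the nvmf_tgt inside the namespace):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192     # TCP transport; -u 8192 sets the I/O unit size
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The nvmf_get_subsystems output earlier confirms the result: the discovery subsystem plus cnode1 with namespace 1 (Malloc0), both listening on 10.0.0.2:4420.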
[2024-10-07 09:46:55.952588] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.555 [2024-10-07 09:46:55.952595] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.555 [2024-10-07 09:46:55.952602] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.952606] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174d480) on tqpair=0x16ed620 00:25:56.555 [2024-10-07 09:46:55.952612] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:56.555 [2024-10-07 09:46:55.952638] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.952643] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.952647] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ed620) 00:25:56.555 [2024-10-07 09:46:55.952654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.555 [2024-10-07 09:46:55.952665] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174d480, cid 0, qid 0 00:25:56.555 [2024-10-07 09:46:55.952893] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.555 [2024-10-07 09:46:55.952900] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.555 [2024-10-07 09:46:55.952904] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.952908] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174d480) on tqpair=0x16ed620 00:25:56.555 [2024-10-07 09:46:55.952912] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:56.555 [2024-10-07 09:46:55.952917] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:56.555 [2024-10-07 09:46:55.952925] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:56.555 [2024-10-07 09:46:55.953031] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:56.555 [2024-10-07 09:46:55.953036] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:56.555 [2024-10-07 09:46:55.953045] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.953049] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.953052] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ed620) 00:25:56.555 [2024-10-07 09:46:55.953059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.555 [2024-10-07 09:46:55.953070] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174d480, cid 0, qid 0 00:25:56.555 [2024-10-07 09:46:55.953294] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.555 [2024-10-07 09:46:55.953300] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:25:56.555 [2024-10-07 09:46:55.953304] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.953308] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174d480) on tqpair=0x16ed620 00:25:56.555 [2024-10-07 09:46:55.953313] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:56.555 [2024-10-07 09:46:55.953322] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.953326] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.953330] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ed620) 00:25:56.555 [2024-10-07 09:46:55.953340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.555 [2024-10-07 09:46:55.953351] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174d480, cid 0, qid 0 00:25:56.555 [2024-10-07 09:46:55.953590] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.555 [2024-10-07 09:46:55.953596] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.555 [2024-10-07 09:46:55.953600] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.953604] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174d480) on tqpair=0x16ed620 00:25:56.555 [2024-10-07 09:46:55.953609] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:56.555 [2024-10-07 09:46:55.953613] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:56.555 [2024-10-07 09:46:55.953632] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:56.555 [2024-10-07 09:46:55.953641] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:56.555 [2024-10-07 09:46:55.953652] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.953656] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ed620) 00:25:56.555 [2024-10-07 09:46:55.953663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.555 [2024-10-07 09:46:55.953674] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174d480, cid 0, qid 0 00:25:56.555 [2024-10-07 09:46:55.953934] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:56.555 [2024-10-07 09:46:55.953946] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:56.555 [2024-10-07 09:46:55.953950] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.953954] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16ed620): datao=0, datal=4096, cccid=0 00:25:56.555 [2024-10-07 09:46:55.953959] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x174d480) on tqpair(0x16ed620): expected_datao=0, 
payload_size=4096 00:25:56.555 [2024-10-07 09:46:55.953964] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.953972] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.953977] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.954146] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.555 [2024-10-07 09:46:55.954153] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.555 [2024-10-07 09:46:55.954156] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.954160] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174d480) on tqpair=0x16ed620 00:25:56.555 [2024-10-07 09:46:55.954170] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:56.555 [2024-10-07 09:46:55.954174] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:56.555 [2024-10-07 09:46:55.954180] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:56.555 [2024-10-07 09:46:55.954185] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:56.555 [2024-10-07 09:46:55.954190] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:56.555 [2024-10-07 09:46:55.954195] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:56.555 [2024-10-07 09:46:55.954204] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:56.555 [2024-10-07 09:46:55.954214] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.954221] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.954224] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ed620) 00:25:56.555 [2024-10-07 09:46:55.954232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:56.555 [2024-10-07 09:46:55.954243] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174d480, cid 0, qid 0 00:25:56.555 [2024-10-07 09:46:55.954516] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.555 [2024-10-07 09:46:55.954523] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.555 [2024-10-07 09:46:55.954527] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.954531] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174d480) on tqpair=0x16ed620 00:25:56.555 [2024-10-07 09:46:55.954539] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.954543] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.954547] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ed620) 00:25:56.555 [2024-10-07 09:46:55.954553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.555 [2024-10-07 09:46:55.954559] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.954563] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.954567] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x16ed620) 00:25:56.555 [2024-10-07 09:46:55.954573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.555 [2024-10-07 09:46:55.954579] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.954583] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.555 [2024-10-07 09:46:55.954586] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x16ed620) 00:25:56.555 [2024-10-07 09:46:55.954592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.556 [2024-10-07 09:46:55.954598] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.954602] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.954606] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16ed620) 00:25:56.556 [2024-10-07 09:46:55.954611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.556 [2024-10-07 09:46:55.954626] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:56.556 [2024-10-07 09:46:55.954639] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:56.556 [2024-10-07 09:46:55.954648] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.954652] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16ed620) 00:25:56.556 [2024-10-07 09:46:55.954659] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.556 [2024-10-07 09:46:55.954672] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174d480, cid 0, qid 0 00:25:56.556 [2024-10-07 09:46:55.954677] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174d600, cid 1, qid 0 00:25:56.556 [2024-10-07 09:46:55.954682] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174d780, cid 2, qid 0 00:25:56.556 [2024-10-07 09:46:55.954687] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174d900, cid 3, qid 0 00:25:56.556 [2024-10-07 09:46:55.954691] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174da80, cid 4, qid 0 00:25:56.556 [2024-10-07 09:46:55.954966] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.556 [2024-10-07 09:46:55.954973] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.556 [2024-10-07 09:46:55.954977] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.954981] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x174da80) on tqpair=0x16ed620 00:25:56.556 [2024-10-07 09:46:55.954986] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:56.556 [2024-10-07 09:46:55.954991] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:56.556 [2024-10-07 09:46:55.955002] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.955007] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16ed620) 00:25:56.556 [2024-10-07 09:46:55.955018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.556 [2024-10-07 09:46:55.955029] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174da80, cid 4, qid 0 00:25:56.556 [2024-10-07 09:46:55.955258] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:56.556 [2024-10-07 09:46:55.955268] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:56.556 [2024-10-07 09:46:55.955276] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.955281] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16ed620): datao=0, datal=4096, cccid=4 00:25:56.556 [2024-10-07 09:46:55.955285] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x174da80) on tqpair(0x16ed620): expected_datao=0, payload_size=4096 00:25:56.556 [2024-10-07 09:46:55.955289] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.955300] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.955304] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.955474] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.556 [2024-10-07 09:46:55.955482] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.556 [2024-10-07 09:46:55.955485] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.955489] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174da80) on tqpair=0x16ed620 00:25:56.556 [2024-10-07 09:46:55.955503] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:56.556 [2024-10-07 09:46:55.955535] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.955540] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16ed620) 00:25:56.556 [2024-10-07 09:46:55.955546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.556 [2024-10-07 09:46:55.955554] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.955558] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.955561] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16ed620) 00:25:56.556 [2024-10-07 09:46:55.955568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.556 [2024-10-07 
09:46:55.955580] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174da80, cid 4, qid 0 00:25:56.556 [2024-10-07 09:46:55.955585] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174dc00, cid 5, qid 0 00:25:56.556 [2024-10-07 09:46:55.959633] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:56.556 [2024-10-07 09:46:55.959643] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:56.556 [2024-10-07 09:46:55.959655] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.959659] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16ed620): datao=0, datal=1024, cccid=4 00:25:56.556 [2024-10-07 09:46:55.959664] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x174da80) on tqpair(0x16ed620): expected_datao=0, payload_size=1024 00:25:56.556 [2024-10-07 09:46:55.959669] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.959676] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.959679] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.959685] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.556 [2024-10-07 09:46:55.959691] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.556 [2024-10-07 09:46:55.959695] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.959699] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174dc00) on tqpair=0x16ed620 00:25:56.556 [2024-10-07 09:46:55.996827] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.556 [2024-10-07 09:46:55.996840] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.556 [2024-10-07 09:46:55.996843] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.996847] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174da80) on tqpair=0x16ed620 00:25:56.556 [2024-10-07 09:46:55.996869] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.996876] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16ed620) 00:25:56.556 [2024-10-07 09:46:55.996885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.556 [2024-10-07 09:46:55.996903] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174da80, cid 4, qid 0 00:25:56.556 [2024-10-07 09:46:55.997108] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:56.556 [2024-10-07 09:46:55.997118] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:56.556 [2024-10-07 09:46:55.997122] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.997126] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16ed620): datao=0, datal=3072, cccid=4 00:25:56.556 [2024-10-07 09:46:55.997131] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x174da80) on tqpair(0x16ed620): expected_datao=0, payload_size=3072 00:25:56.556 [2024-10-07 09:46:55.997135] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.997142] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.997146] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.997284] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.556 [2024-10-07 09:46:55.997291] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.556 [2024-10-07 09:46:55.997294] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.997298] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174da80) on tqpair=0x16ed620 00:25:56.556 [2024-10-07 09:46:55.997307] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.997311] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16ed620) 00:25:56.556 [2024-10-07 09:46:55.997318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.556 [2024-10-07 09:46:55.997336] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174da80, cid 4, qid 0 00:25:56.556 [2024-10-07 09:46:55.997570] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:56.556 [2024-10-07 09:46:55.997577] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:56.556 [2024-10-07 09:46:55.997580] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.997587] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16ed620): datao=0, datal=8, cccid=4 00:25:56.556 [2024-10-07 09:46:55.997592] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x174da80) on tqpair(0x16ed620): expected_datao=0, payload_size=8 00:25:56.556 [2024-10-07 09:46:55.997597] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.997603] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:55.997607] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:56.037833] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.556 [2024-10-07 09:46:56.037850] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.556 [2024-10-07 09:46:56.037854] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.556 [2024-10-07 09:46:56.037858] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174da80) on tqpair=0x16ed620 00:25:56.556 ===================================================== 00:25:56.556 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:56.556 ===================================================== 00:25:56.556 Controller Capabilities/Features 00:25:56.556 ================================ 00:25:56.556 Vendor ID: 0000 00:25:56.556 Subsystem Vendor ID: 0000 00:25:56.556 Serial Number: .................... 00:25:56.556 Model Number: ........................................ 
00:25:56.556 Firmware Version: 25.01 00:25:56.556 Recommended Arb Burst: 0 00:25:56.556 IEEE OUI Identifier: 00 00 00 00:25:56.556 Multi-path I/O 00:25:56.556 May have multiple subsystem ports: No 00:25:56.556 May have multiple controllers: No 00:25:56.556 Associated with SR-IOV VF: No 00:25:56.556 Max Data Transfer Size: 131072 00:25:56.556 Max Number of Namespaces: 0 00:25:56.557 Max Number of I/O Queues: 1024 00:25:56.557 NVMe Specification Version (VS): 1.3 00:25:56.557 NVMe Specification Version (Identify): 1.3 00:25:56.557 Maximum Queue Entries: 128 00:25:56.557 Contiguous Queues Required: Yes 00:25:56.557 Arbitration Mechanisms Supported 00:25:56.557 Weighted Round Robin: Not Supported 00:25:56.557 Vendor Specific: Not Supported 00:25:56.557 Reset Timeout: 15000 ms 00:25:56.557 Doorbell Stride: 4 bytes 00:25:56.557 NVM Subsystem Reset: Not Supported 00:25:56.557 Command Sets Supported 00:25:56.557 NVM Command Set: Supported 00:25:56.557 Boot Partition: Not Supported 00:25:56.557 Memory Page Size Minimum: 4096 bytes 00:25:56.557 Memory Page Size Maximum: 4096 bytes 00:25:56.557 Persistent Memory Region: Not Supported 00:25:56.557 Optional Asynchronous Events Supported 00:25:56.557 Namespace Attribute Notices: Not Supported 00:25:56.557 Firmware Activation Notices: Not Supported 00:25:56.557 ANA Change Notices: Not Supported 00:25:56.557 PLE Aggregate Log Change Notices: Not Supported 00:25:56.557 LBA Status Info Alert Notices: Not Supported 00:25:56.557 EGE Aggregate Log Change Notices: Not Supported 00:25:56.557 Normal NVM Subsystem Shutdown event: Not Supported 00:25:56.557 Zone Descriptor Change Notices: Not Supported 00:25:56.557 Discovery Log Change Notices: Supported 00:25:56.557 Controller Attributes 00:25:56.557 128-bit Host Identifier: Not Supported 00:25:56.557 Non-Operational Permissive Mode: Not Supported 00:25:56.557 NVM Sets: Not Supported 00:25:56.557 Read Recovery Levels: Not Supported 00:25:56.557 Endurance Groups: Not Supported 00:25:56.557 Predictable Latency Mode: Not Supported 00:25:56.557 Traffic Based Keep Alive: Not Supported 00:25:56.557 Namespace Granularity: Not Supported 00:25:56.557 SQ Associations: Not Supported 00:25:56.557 UUID List: Not Supported 00:25:56.557 Multi-Domain Subsystem: Not Supported 00:25:56.557 Fixed Capacity Management: Not Supported 00:25:56.557 Variable Capacity Management: Not Supported 00:25:56.557 Delete Endurance Group: Not Supported 00:25:56.557 Delete NVM Set: Not Supported 00:25:56.557 Extended LBA Formats Supported: Not Supported 00:25:56.557 Flexible Data Placement Supported: Not Supported 00:25:56.557 00:25:56.557 Controller Memory Buffer Support 00:25:56.557 ================================ 00:25:56.557 Supported: No 00:25:56.557 00:25:56.557 Persistent Memory Region Support 00:25:56.557 ================================ 00:25:56.557 Supported: No 00:25:56.557 00:25:56.557 Admin Command Set Attributes 00:25:56.557 ============================ 00:25:56.557 Security Send/Receive: Not Supported 00:25:56.557 Format NVM: Not Supported 00:25:56.557 Firmware Activate/Download: Not Supported 00:25:56.557 Namespace Management: Not Supported 00:25:56.557 Device Self-Test: Not Supported 00:25:56.557 Directives: Not Supported 00:25:56.557 NVMe-MI: Not Supported 00:25:56.557 Virtualization Management: Not Supported 00:25:56.557 Doorbell Buffer Config: Not Supported 00:25:56.557 Get LBA Status Capability: Not Supported 00:25:56.557 Command & Feature Lockdown Capability: Not Supported 00:25:56.557 Abort Command Limit: 1 00:25:56.557 Async
Event Request Limit: 4 00:25:56.557 Number of Firmware Slots: N/A 00:25:56.557 Firmware Slot 1 Read-Only: N/A 00:25:56.557 Firmware Activation Without Reset: N/A 00:25:56.557 Multiple Update Detection Support: N/A 00:25:56.557 Firmware Update Granularity: No Information Provided 00:25:56.557 Per-Namespace SMART Log: No 00:25:56.557 Asymmetric Namespace Access Log Page: Not Supported 00:25:56.557 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:56.557 Command Effects Log Page: Not Supported 00:25:56.557 Get Log Page Extended Data: Supported 00:25:56.557 Telemetry Log Pages: Not Supported 00:25:56.557 Persistent Event Log Pages: Not Supported 00:25:56.557 Supported Log Pages Log Page: May Support 00:25:56.557 Commands Supported & Effects Log Page: Not Supported 00:25:56.557 Feature Identifiers & Effects Log Page: May Support 00:25:56.557 NVMe-MI Commands & Effects Log Page: May Support 00:25:56.557 Data Area 4 for Telemetry Log: Not Supported 00:25:56.557 Error Log Page Entries Supported: 128 00:25:56.557 Keep Alive: Not Supported 00:25:56.557 00:25:56.557 NVM Command Set Attributes 00:25:56.557 ========================== 00:25:56.557 Submission Queue Entry Size 00:25:56.557 Max: 1 00:25:56.557 Min: 1 00:25:56.557 Completion Queue Entry Size 00:25:56.557 Max: 1 00:25:56.557 Min: 1 00:25:56.557 Number of Namespaces: 0 00:25:56.557 Compare Command: Not Supported 00:25:56.557 Write Uncorrectable Command: Not Supported 00:25:56.557 Dataset Management Command: Not Supported 00:25:56.557 Write Zeroes Command: Not Supported 00:25:56.557 Set Features Save Field: Not Supported 00:25:56.557 Reservations: Not Supported 00:25:56.557 Timestamp: Not Supported 00:25:56.557 Copy: Not Supported 00:25:56.557 Volatile Write Cache: Not Present 00:25:56.557 Atomic Write Unit (Normal): 1 00:25:56.557 Atomic Write Unit (PFail): 1 00:25:56.557 Atomic Compare & Write Unit: 1 00:25:56.557 Fused Compare & Write: Supported 00:25:56.557 Scatter-Gather List 00:25:56.557 SGL Command Set: Supported 00:25:56.557 SGL Keyed: Supported 00:25:56.557 SGL Bit Bucket Descriptor: Not Supported 00:25:56.557 SGL Metadata Pointer: Not Supported 00:25:56.557 Oversized SGL: Not Supported 00:25:56.557 SGL Metadata Address: Not Supported 00:25:56.557 SGL Offset: Supported 00:25:56.557 Transport SGL Data Block: Not Supported 00:25:56.557 Replay Protected Memory Block: Not Supported 00:25:56.557 00:25:56.557 Firmware Slot Information 00:25:56.557 ========================= 00:25:56.557 Active slot: 0 00:25:56.557 00:25:56.557 00:25:56.557 Error Log 00:25:56.557 ========= 00:25:56.557 00:25:56.557 Active Namespaces 00:25:56.557 ================= 00:25:56.557 Discovery Log Page 00:25:56.557 ================== 00:25:56.557 Generation Counter: 2 00:25:56.557 Number of Records: 2 00:25:56.557 Record Format: 0 00:25:56.557 00:25:56.557 Discovery Log Entry 0 00:25:56.557 ---------------------- 00:25:56.557 Transport Type: 3 (TCP) 00:25:56.557 Address Family: 1 (IPv4) 00:25:56.557 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:56.557 Entry Flags: 00:25:56.557 Duplicate Returned Information: 1 00:25:56.557 Explicit Persistent Connection Support for Discovery: 1 00:25:56.557 Transport Requirements: 00:25:56.557 Secure Channel: Not Required 00:25:56.557 Port ID: 0 (0x0000) 00:25:56.557 Controller ID: 65535 (0xffff) 00:25:56.557 Admin Max SQ Size: 128 00:25:56.557 Transport Service Identifier: 4420 00:25:56.557 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:56.557 Transport Address: 10.0.0.2 00:25:56.557
Discovery Log Entry 1 00:25:56.557 ---------------------- 00:25:56.557 Transport Type: 3 (TCP) 00:25:56.557 Address Family: 1 (IPv4) 00:25:56.557 Subsystem Type: 2 (NVM Subsystem) 00:25:56.557 Entry Flags: 00:25:56.557 Duplicate Returned Information: 0 00:25:56.557 Explicit Persistent Connection Support for Discovery: 0 00:25:56.557 Transport Requirements: 00:25:56.557 Secure Channel: Not Required 00:25:56.557 Port ID: 0 (0x0000) 00:25:56.557 Controller ID: 65535 (0xffff) 00:25:56.557 Admin Max SQ Size: 128 00:25:56.557 Transport Service Identifier: 4420 00:25:56.557 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:56.557 Transport Address: 10.0.0.2 [2024-10-07 09:46:56.037962] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:56.557 [2024-10-07 09:46:56.037975] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174d480) on tqpair=0x16ed620 00:25:56.557 [2024-10-07 09:46:56.037986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.557 [2024-10-07 09:46:56.037992] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174d600) on tqpair=0x16ed620 00:25:56.557 [2024-10-07 09:46:56.037996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.557 [2024-10-07 09:46:56.038001] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174d780) on tqpair=0x16ed620 00:25:56.557 [2024-10-07 09:46:56.038006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.557 [2024-10-07 09:46:56.038011] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174d900) on tqpair=0x16ed620 00:25:56.557 [2024-10-07 09:46:56.038016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.557 [2024-10-07 09:46:56.038026] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.038030] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.038034] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16ed620) 00:25:56.558 [2024-10-07 09:46:56.038042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.558 [2024-10-07 09:46:56.038059] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174d900, cid 3, qid 0 00:25:56.558 [2024-10-07 09:46:56.038303] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.558 [2024-10-07 09:46:56.038310] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.558 [2024-10-07 09:46:56.038314] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.038318] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174d900) on tqpair=0x16ed620 00:25:56.558 [2024-10-07 09:46:56.038325] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.038329] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.038333] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16ed620) 00:25:56.558 [2024-10-07 
09:46:56.038340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.558 [2024-10-07 09:46:56.038356] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174d900, cid 3, qid 0 00:25:56.558 [2024-10-07 09:46:56.038553] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.558 [2024-10-07 09:46:56.038560] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.558 [2024-10-07 09:46:56.038563] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.038570] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174d900) on tqpair=0x16ed620 00:25:56.558 [2024-10-07 09:46:56.038575] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:56.558 [2024-10-07 09:46:56.038583] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:56.558 [2024-10-07 09:46:56.038593] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.038599] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.038605] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16ed620) 00:25:56.558 [2024-10-07 09:46:56.038611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.558 [2024-10-07 09:46:56.042631] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x174d900, cid 3, qid 0 00:25:56.558 [2024-10-07 09:46:56.042862] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.558 [2024-10-07 09:46:56.042869] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.558 [2024-10-07 09:46:56.042872] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.042876] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x174d900) on tqpair=0x16ed620 00:25:56.558 [2024-10-07 09:46:56.042885] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:25:56.558 00:25:56.558 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:56.558 [2024-10-07 09:46:56.090519] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
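The host/identify.sh command just printed points the same spdk_nvme_identify example app at subnqn:nqn.2016-06.io.spdk:cnode1, and the DEBUG trace that follows is the controller-initialization state machine that the connect path walks (connect adminq, read VS/CAP, CC.EN = 1, wait for CSTS.RDY = 1, IDENTIFY). As a reading aid, a minimal host program doing the equivalent through SPDK's public API could look like the sketch below; the app name and printed fields are illustrative, error handling is trimmed, and the transport string is the one from the log.

    #include <stdio.h>

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        /* Bring up the SPDK environment; this is what emits the
         * "DPDK EAL parameters" line in the log. */
        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch"; /* illustrative app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same transport ID string the test passes with -r. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* spdk_nvme_connect() performs the state machine the DEBUG
         * lines below narrate, returning a ready controller handle. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* The cached IDENTIFY CONTROLLER data backs the report printed
         * in this log ("Model Number", "Firmware Version", ...). */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model Number: %.*s\n", (int)sizeof(cdata->mn), (const char *)cdata->mn);
        printf("Firmware Version: %.*s\n", (int)sizeof(cdata->fr), (const char *)cdata->fr);

        spdk_nvme_detach(ctrlr);
        return 0;
    }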
00:25:56.558 [2024-10-07 09:46:56.090563] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455974 ] 00:25:56.558 [2024-10-07 09:46:56.128705] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:56.558 [2024-10-07 09:46:56.128767] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:56.558 [2024-10-07 09:46:56.128772] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:56.558 [2024-10-07 09:46:56.128787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:56.558 [2024-10-07 09:46:56.128798] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:56.558 [2024-10-07 09:46:56.129486] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:56.558 [2024-10-07 09:46:56.129527] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa1c620 0 00:25:56.558 [2024-10-07 09:46:56.143640] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:56.558 [2024-10-07 09:46:56.143655] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:56.558 [2024-10-07 09:46:56.143661] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:56.558 [2024-10-07 09:46:56.143664] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:56.558 [2024-10-07 09:46:56.143694] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.143700] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.143704] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1c620) 00:25:56.558 [2024-10-07 09:46:56.143724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:56.558 [2024-10-07 09:46:56.143748] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c480, cid 0, qid 0 00:25:56.558 [2024-10-07 09:46:56.151635] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.558 [2024-10-07 09:46:56.151645] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.558 [2024-10-07 09:46:56.151649] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.151654] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c480) on tqpair=0xa1c620 00:25:56.558 [2024-10-07 09:46:56.151663] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:56.558 [2024-10-07 09:46:56.151671] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:56.558 [2024-10-07 09:46:56.151676] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:56.558 [2024-10-07 09:46:56.151691] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.151695] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.151699] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1c620) 00:25:56.558 [2024-10-07 09:46:56.151707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.558 [2024-10-07 09:46:56.151723] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c480, cid 0, qid 0 00:25:56.558 [2024-10-07 09:46:56.151947] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.558 [2024-10-07 09:46:56.151954] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.558 [2024-10-07 09:46:56.151958] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.151962] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c480) on tqpair=0xa1c620 00:25:56.558 [2024-10-07 09:46:56.151967] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:56.558 [2024-10-07 09:46:56.151975] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:56.558 [2024-10-07 09:46:56.151982] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.151986] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.151989] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1c620) 00:25:56.558 [2024-10-07 09:46:56.151996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.558 [2024-10-07 09:46:56.152007] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c480, cid 0, qid 0 00:25:56.558 [2024-10-07 09:46:56.152242] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.558 [2024-10-07 09:46:56.152248] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.558 [2024-10-07 09:46:56.152252] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.152256] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c480) on tqpair=0xa1c620 00:25:56.558 [2024-10-07 09:46:56.152261] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:56.558 [2024-10-07 09:46:56.152269] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:56.558 [2024-10-07 09:46:56.152276] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.152280] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.152284] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1c620) 00:25:56.558 [2024-10-07 09:46:56.152290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.558 [2024-10-07 09:46:56.152305] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c480, cid 0, qid 0 00:25:56.558 [2024-10-07 09:46:56.152544] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.558 [2024-10-07 09:46:56.152551] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.558 [2024-10-07 09:46:56.152554] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.152558] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c480) on tqpair=0xa1c620 00:25:56.558 [2024-10-07 09:46:56.152563] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:56.558 [2024-10-07 09:46:56.152573] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.152577] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.152581] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1c620) 00:25:56.558 [2024-10-07 09:46:56.152587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.558 [2024-10-07 09:46:56.152598] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c480, cid 0, qid 0 00:25:56.558 [2024-10-07 09:46:56.152811] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.558 [2024-10-07 09:46:56.152817] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.558 [2024-10-07 09:46:56.152821] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.558 [2024-10-07 09:46:56.152825] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c480) on tqpair=0xa1c620 00:25:56.558 [2024-10-07 09:46:56.152829] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:56.558 [2024-10-07 09:46:56.152834] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:56.558 [2024-10-07 09:46:56.152842] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:56.559 [2024-10-07 09:46:56.152948] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:56.559 [2024-10-07 09:46:56.152952] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:56.559 [2024-10-07 09:46:56.152960] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.152964] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.152968] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1c620) 00:25:56.559 [2024-10-07 09:46:56.152974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.559 [2024-10-07 09:46:56.152985] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c480, cid 0, qid 0 00:25:56.559 [2024-10-07 09:46:56.153202] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.559 [2024-10-07 09:46:56.153209] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.559 [2024-10-07 09:46:56.153212] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.153216] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c480) on tqpair=0xa1c620 00:25:56.559 [2024-10-07 09:46:56.153221] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:56.559 [2024-10-07 09:46:56.153230] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.153234] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.153237] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1c620) 00:25:56.559 [2024-10-07 09:46:56.153244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.559 [2024-10-07 09:46:56.153257] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c480, cid 0, qid 0 00:25:56.559 [2024-10-07 09:46:56.153453] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.559 [2024-10-07 09:46:56.153460] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.559 [2024-10-07 09:46:56.153463] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.153467] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c480) on tqpair=0xa1c620 00:25:56.559 [2024-10-07 09:46:56.153471] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:56.559 [2024-10-07 09:46:56.153476] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:56.559 [2024-10-07 09:46:56.153484] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:56.559 [2024-10-07 09:46:56.153497] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:56.559 [2024-10-07 09:46:56.153506] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.153510] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1c620) 00:25:56.559 [2024-10-07 09:46:56.153517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.559 [2024-10-07 09:46:56.153528] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c480, cid 0, qid 0 00:25:56.559 [2024-10-07 09:46:56.153793] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:56.559 [2024-10-07 09:46:56.153800] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:56.559 [2024-10-07 09:46:56.153803] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.153808] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa1c620): datao=0, datal=4096, cccid=0 00:25:56.559 [2024-10-07 09:46:56.153812] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa7c480) on tqpair(0xa1c620): expected_datao=0, payload_size=4096 00:25:56.559 [2024-10-07 09:46:56.153817] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.153832] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.153836] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:56.559 [2024-10-07 
09:46:56.197628] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.559 [2024-10-07 09:46:56.197638] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.559 [2024-10-07 09:46:56.197642] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.197646] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c480) on tqpair=0xa1c620 00:25:56.559 [2024-10-07 09:46:56.197655] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:56.559 [2024-10-07 09:46:56.197660] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:56.559 [2024-10-07 09:46:56.197664] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:56.559 [2024-10-07 09:46:56.197669] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:56.559 [2024-10-07 09:46:56.197673] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:56.559 [2024-10-07 09:46:56.197678] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:56.559 [2024-10-07 09:46:56.197687] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:56.559 [2024-10-07 09:46:56.197697] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.197702] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.197705] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1c620) 00:25:56.559 [2024-10-07 09:46:56.197713] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:56.559 [2024-10-07 09:46:56.197726] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c480, cid 0, qid 0 00:25:56.559 [2024-10-07 09:46:56.197949] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.559 [2024-10-07 09:46:56.197955] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.559 [2024-10-07 09:46:56.197959] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.197963] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c480) on tqpair=0xa1c620 00:25:56.559 [2024-10-07 09:46:56.197970] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.197973] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.197977] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1c620) 00:25:56.559 [2024-10-07 09:46:56.197983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.559 [2024-10-07 09:46:56.197990] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.197994] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.197997] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa1c620) 00:25:56.559 
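At this point in the trace the host has finished IDENTIFY CONTROLLER and is arming asynchronous event reporting: SET FEATURES ASYNC EVENT CONFIGURATION followed by ASYNC EVENT REQUEST submissions on cid 0 through 3, matching the Async Event Request Limit of 4 that both controllers report. SPDK is polled-mode, so once spdk_nvme_connect() returns, those outstanding AERs and the keep-alive timer are serviced only while the application polls the admin queue; a minimal poll loop (everything except the SPDK call is illustrative) looks like:

    #include <stdbool.h>

    #include "spdk/nvme.h"

    /* Outstanding ASYNC EVENT REQUESTs complete and keep-alives are
     * transmitted only while the admin queue is polled. */
    static void
    poll_admin_queue(struct spdk_nvme_ctrlr *ctrlr, volatile bool *running)
    {
        while (*running) {
            /* Number of completions processed, or negative on failure
             * (e.g. the transport connection dropped). */
            int32_t rc = spdk_nvme_ctrlr_process_admin_completions(ctrlr);
            if (rc < 0) {
                break;
            }
        }
    }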
[2024-10-07 09:46:56.198003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.559 [2024-10-07 09:46:56.198010] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.198013] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.198017] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa1c620) 00:25:56.559 [2024-10-07 09:46:56.198023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.559 [2024-10-07 09:46:56.198029] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.198033] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.198036] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1c620) 00:25:56.559 [2024-10-07 09:46:56.198042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.559 [2024-10-07 09:46:56.198047] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:56.559 [2024-10-07 09:46:56.198059] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:56.559 [2024-10-07 09:46:56.198065] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.198069] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa1c620) 00:25:56.559 [2024-10-07 09:46:56.198076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.559 [2024-10-07 09:46:56.198088] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c480, cid 0, qid 0 00:25:56.559 [2024-10-07 09:46:56.198093] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c600, cid 1, qid 0 00:25:56.559 [2024-10-07 09:46:56.198098] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c780, cid 2, qid 0 00:25:56.559 [2024-10-07 09:46:56.198103] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c900, cid 3, qid 0 00:25:56.559 [2024-10-07 09:46:56.198109] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7ca80, cid 4, qid 0 00:25:56.559 [2024-10-07 09:46:56.198341] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.559 [2024-10-07 09:46:56.198347] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.559 [2024-10-07 09:46:56.198351] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.559 [2024-10-07 09:46:56.198355] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7ca80) on tqpair=0xa1c620 00:25:56.560 [2024-10-07 09:46:56.198359] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:56.560 [2024-10-07 09:46:56.198365] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:56.560 [2024-10-07 09:46:56.198373] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:56.560 [2024-10-07 09:46:56.198381] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:56.560 [2024-10-07 09:46:56.198388] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.198392] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.198396] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa1c620) 00:25:56.560 [2024-10-07 09:46:56.198402] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:56.560 [2024-10-07 09:46:56.198412] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7ca80, cid 4, qid 0 00:25:56.560 [2024-10-07 09:46:56.198670] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.560 [2024-10-07 09:46:56.198677] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.560 [2024-10-07 09:46:56.198681] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.198685] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7ca80) on tqpair=0xa1c620 00:25:56.560 [2024-10-07 09:46:56.198754] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:56.560 [2024-10-07 09:46:56.198765] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:56.560 [2024-10-07 09:46:56.198773] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.198777] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa1c620) 00:25:56.560 [2024-10-07 09:46:56.198784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.560 [2024-10-07 09:46:56.198795] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7ca80, cid 4, qid 0 00:25:56.560 [2024-10-07 09:46:56.199018] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:56.560 [2024-10-07 09:46:56.199025] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:56.560 [2024-10-07 09:46:56.199028] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.199032] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa1c620): datao=0, datal=4096, cccid=4 00:25:56.560 [2024-10-07 09:46:56.199037] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa7ca80) on tqpair(0xa1c620): expected_datao=0, payload_size=4096 00:25:56.560 [2024-10-07 09:46:56.199041] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.199049] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.199053] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.199221] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.560 [2024-10-07 09:46:56.199227] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:25:56.560 [2024-10-07 09:46:56.199236] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.199240] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7ca80) on tqpair=0xa1c620 00:25:56.560 [2024-10-07 09:46:56.199250] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:56.560 [2024-10-07 09:46:56.199259] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:56.560 [2024-10-07 09:46:56.199269] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:56.560 [2024-10-07 09:46:56.199276] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.199280] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa1c620) 00:25:56.560 [2024-10-07 09:46:56.199286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.560 [2024-10-07 09:46:56.199297] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7ca80, cid 4, qid 0 00:25:56.560 [2024-10-07 09:46:56.199535] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:56.560 [2024-10-07 09:46:56.199542] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:56.560 [2024-10-07 09:46:56.199545] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.199549] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa1c620): datao=0, datal=4096, cccid=4 00:25:56.560 [2024-10-07 09:46:56.199554] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa7ca80) on tqpair(0xa1c620): expected_datao=0, payload_size=4096 00:25:56.560 [2024-10-07 09:46:56.199558] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.199565] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.199568] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.199776] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.560 [2024-10-07 09:46:56.199783] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.560 [2024-10-07 09:46:56.199787] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.199790] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7ca80) on tqpair=0xa1c620 00:25:56.560 [2024-10-07 09:46:56.199804] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:56.560 [2024-10-07 09:46:56.199813] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:56.560 [2024-10-07 09:46:56.199821] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.199824] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa1c620) 00:25:56.560 [2024-10-07 09:46:56.199831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.560 [2024-10-07 09:46:56.199842] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7ca80, cid 4, qid 0 00:25:56.560 [2024-10-07 09:46:56.200039] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:56.560 [2024-10-07 09:46:56.200045] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:56.560 [2024-10-07 09:46:56.200049] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.200052] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa1c620): datao=0, datal=4096, cccid=4 00:25:56.560 [2024-10-07 09:46:56.200057] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa7ca80) on tqpair(0xa1c620): expected_datao=0, payload_size=4096 00:25:56.560 [2024-10-07 09:46:56.200061] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.200070] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.200074] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.200280] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.560 [2024-10-07 09:46:56.200286] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.560 [2024-10-07 09:46:56.200289] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.200293] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7ca80) on tqpair=0xa1c620 00:25:56.560 [2024-10-07 09:46:56.200300] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:56.560 [2024-10-07 09:46:56.200309] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:56.560 [2024-10-07 09:46:56.200317] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:56.560 [2024-10-07 09:46:56.200323] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:56.560 [2024-10-07 09:46:56.200328] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:56.560 [2024-10-07 09:46:56.200334] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:56.560 [2024-10-07 09:46:56.200339] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:56.560 [2024-10-07 09:46:56.200344] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:56.560 [2024-10-07 09:46:56.200349] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:56.560 [2024-10-07 09:46:56.200366] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.200370] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa1c620) 00:25:56.560 [2024-10-07 09:46:56.200377] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.560 [2024-10-07 09:46:56.200384] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.200388] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.200391] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa1c620) 00:25:56.560 [2024-10-07 09:46:56.200398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.560 [2024-10-07 09:46:56.200410] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7ca80, cid 4, qid 0 00:25:56.560 [2024-10-07 09:46:56.200416] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7cc00, cid 5, qid 0 00:25:56.560 [2024-10-07 09:46:56.200634] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.560 [2024-10-07 09:46:56.200641] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.560 [2024-10-07 09:46:56.200645] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.200649] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7ca80) on tqpair=0xa1c620 00:25:56.560 [2024-10-07 09:46:56.200655] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.560 [2024-10-07 09:46:56.200661] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.560 [2024-10-07 09:46:56.200665] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.200668] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7cc00) on tqpair=0xa1c620 00:25:56.560 [2024-10-07 09:46:56.200678] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.560 [2024-10-07 09:46:56.200685] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa1c620) 00:25:56.560 [2024-10-07 09:46:56.200691] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.560 [2024-10-07 09:46:56.200702] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7cc00, cid 5, qid 0 00:25:56.560 [2024-10-07 09:46:56.200936] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.560 [2024-10-07 09:46:56.200942] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.561 [2024-10-07 09:46:56.200946] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.200950] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7cc00) on tqpair=0xa1c620 00:25:56.561 [2024-10-07 09:46:56.200959] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.200963] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa1c620) 00:25:56.561 [2024-10-07 09:46:56.200970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.561 [2024-10-07 09:46:56.200980] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7cc00, cid 5, qid 0 00:25:56.561 [2024-10-07 09:46:56.201222] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.561 [2024-10-07 09:46:56.201228] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:25:56.561 [2024-10-07 09:46:56.201232] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.201236] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7cc00) on tqpair=0xa1c620 00:25:56.561 [2024-10-07 09:46:56.201245] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.201249] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa1c620) 00:25:56.561 [2024-10-07 09:46:56.201256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.561 [2024-10-07 09:46:56.201266] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7cc00, cid 5, qid 0 00:25:56.561 [2024-10-07 09:46:56.201491] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.561 [2024-10-07 09:46:56.201497] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.561 [2024-10-07 09:46:56.201500] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.201504] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7cc00) on tqpair=0xa1c620 00:25:56.561 [2024-10-07 09:46:56.201519] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.201524] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa1c620) 00:25:56.561 [2024-10-07 09:46:56.201530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.561 [2024-10-07 09:46:56.201538] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.201542] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa1c620) 00:25:56.561 [2024-10-07 09:46:56.201548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.561 [2024-10-07 09:46:56.201555] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.201559] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xa1c620) 00:25:56.561 [2024-10-07 09:46:56.201565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.561 [2024-10-07 09:46:56.201575] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.201579] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa1c620) 00:25:56.561 [2024-10-07 09:46:56.201587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.561 [2024-10-07 09:46:56.201599] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7cc00, cid 5, qid 0 00:25:56.561 [2024-10-07 09:46:56.201604] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7ca80, cid 4, qid 0 00:25:56.561 [2024-10-07 09:46:56.201609] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7cd80, cid 6, qid 0 00:25:56.561 [2024-10-07 
09:46:56.201614] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7cf00, cid 7, qid 0 00:25:56.561 [2024-10-07 09:46:56.205636] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:56.561 [2024-10-07 09:46:56.205642] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:56.561 [2024-10-07 09:46:56.205646] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.205649] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa1c620): datao=0, datal=8192, cccid=5 00:25:56.561 [2024-10-07 09:46:56.205654] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa7cc00) on tqpair(0xa1c620): expected_datao=0, payload_size=8192 00:25:56.561 [2024-10-07 09:46:56.205658] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.205666] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.205670] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.205675] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:56.561 [2024-10-07 09:46:56.205681] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:56.561 [2024-10-07 09:46:56.205684] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.205688] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa1c620): datao=0, datal=512, cccid=4 00:25:56.561 [2024-10-07 09:46:56.205693] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa7ca80) on tqpair(0xa1c620): expected_datao=0, payload_size=512 00:25:56.561 [2024-10-07 09:46:56.205697] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.205703] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.205707] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.205713] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:56.561 [2024-10-07 09:46:56.205719] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:56.561 [2024-10-07 09:46:56.205722] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.205726] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa1c620): datao=0, datal=512, cccid=6 00:25:56.561 [2024-10-07 09:46:56.205730] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa7cd80) on tqpair(0xa1c620): expected_datao=0, payload_size=512 00:25:56.561 [2024-10-07 09:46:56.205734] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.205741] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.205744] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.205750] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:56.561 [2024-10-07 09:46:56.205756] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:56.561 [2024-10-07 09:46:56.205759] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:56.561 [2024-10-07 09:46:56.205763] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa1c620): datao=0, datal=4096, cccid=7 00:25:56.561 [2024-10-07 09:46:56.205767] 
nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa7cf00) on tqpair(0xa1c620): expected_datao=0, payload_size=4096
00:25:56.561 [2024-10-07 09:46:56.205771] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:56.561 [2024-10-07 09:46:56.205778] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:56.561 [2024-10-07 09:46:56.205782] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:56.561 [2024-10-07 09:46:56.205790] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:56.561 [2024-10-07 09:46:56.205795] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:56.561 [2024-10-07 09:46:56.205799] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:56.561 [2024-10-07 09:46:56.205803] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7cc00) on tqpair=0xa1c620
00:25:56.561 [2024-10-07 09:46:56.205815] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:56.561 [2024-10-07 09:46:56.205821] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:56.561 [2024-10-07 09:46:56.205825] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:56.561 [2024-10-07 09:46:56.205828] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7ca80) on tqpair=0xa1c620
00:25:56.561 [2024-10-07 09:46:56.205839] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:56.561 [2024-10-07 09:46:56.205845] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:56.561 [2024-10-07 09:46:56.205848] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:56.561 [2024-10-07 09:46:56.205852] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7cd80) on tqpair=0xa1c620
00:25:56.561 [2024-10-07 09:46:56.205859] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:56.561 [2024-10-07 09:46:56.205865] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:56.561 [2024-10-07 09:46:56.205868] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:56.561 [2024-10-07 09:46:56.205872] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7cf00) on tqpair=0xa1c620
00:25:56.561 =====================================================
00:25:56.561 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:56.561 =====================================================
00:25:56.561 Controller Capabilities/Features
00:25:56.561 ================================
00:25:56.561 Vendor ID: 8086
00:25:56.561 Subsystem Vendor ID: 8086
00:25:56.561 Serial Number: SPDK00000000000001
00:25:56.561 Model Number: SPDK bdev Controller
00:25:56.561 Firmware Version: 25.01
00:25:56.561 Recommended Arb Burst: 6
00:25:56.561 IEEE OUI Identifier: e4 d2 5c
00:25:56.561 Multi-path I/O
00:25:56.561 May have multiple subsystem ports: Yes
00:25:56.561 May have multiple controllers: Yes
00:25:56.561 Associated with SR-IOV VF: No
00:25:56.561 Max Data Transfer Size: 131072
00:25:56.561 Max Number of Namespaces: 32
00:25:56.561 Max Number of I/O Queues: 127
00:25:56.561 NVMe Specification Version (VS): 1.3
00:25:56.561 NVMe Specification Version (Identify): 1.3
00:25:56.561 Maximum Queue Entries: 128
00:25:56.561 Contiguous Queues Required: Yes
00:25:56.561 Arbitration Mechanisms Supported
00:25:56.561 Weighted Round Robin: Not Supported
00:25:56.561 Vendor Specific: Not Supported
00:25:56.561 Reset Timeout: 15000 ms
00:25:56.561 Doorbell Stride: 4 bytes
00:25:56.561 NVM Subsystem Reset: Not Supported
00:25:56.561 Command Sets Supported
00:25:56.561 NVM Command Set: Supported
00:25:56.561 Boot Partition: Not Supported
00:25:56.561 Memory Page Size Minimum: 4096 bytes
00:25:56.561 Memory Page Size Maximum: 4096 bytes
00:25:56.561 Persistent Memory Region: Not Supported
00:25:56.561 Optional Asynchronous Events Supported
00:25:56.561 Namespace Attribute Notices: Supported
00:25:56.561 Firmware Activation Notices: Not Supported
00:25:56.561 ANA Change Notices: Not Supported
00:25:56.561 PLE Aggregate Log Change Notices: Not Supported
00:25:56.561 LBA Status Info Alert Notices: Not Supported
00:25:56.561 EGE Aggregate Log Change Notices: Not Supported
00:25:56.561 Normal NVM Subsystem Shutdown event: Not Supported
00:25:56.561 Zone Descriptor Change Notices: Not Supported
00:25:56.562 Discovery Log Change Notices: Not Supported
00:25:56.562 Controller Attributes
00:25:56.562 128-bit Host Identifier: Supported
00:25:56.562 Non-Operational Permissive Mode: Not Supported
00:25:56.562 NVM Sets: Not Supported
00:25:56.562 Read Recovery Levels: Not Supported
00:25:56.562 Endurance Groups: Not Supported
00:25:56.562 Predictable Latency Mode: Not Supported
00:25:56.562 Traffic Based Keep ALive: Not Supported
00:25:56.562 Namespace Granularity: Not Supported
00:25:56.562 SQ Associations: Not Supported
00:25:56.562 UUID List: Not Supported
00:25:56.562 Multi-Domain Subsystem: Not Supported
00:25:56.562 Fixed Capacity Management: Not Supported
00:25:56.562 Variable Capacity Management: Not Supported
00:25:56.562 Delete Endurance Group: Not Supported
00:25:56.562 Delete NVM Set: Not Supported
00:25:56.562 Extended LBA Formats Supported: Not Supported
00:25:56.562 Flexible Data Placement Supported: Not Supported
00:25:56.562
00:25:56.562 Controller Memory Buffer Support
00:25:56.562 ================================
00:25:56.562 Supported: No
00:25:56.562
00:25:56.562 Persistent Memory Region Support
00:25:56.562 ================================
00:25:56.562 Supported: No
00:25:56.562
00:25:56.562 Admin Command Set Attributes
00:25:56.562 ============================
00:25:56.562 Security Send/Receive: Not Supported
00:25:56.562 Format NVM: Not Supported
00:25:56.562 Firmware Activate/Download: Not Supported
00:25:56.562 Namespace Management: Not Supported
00:25:56.562 Device Self-Test: Not Supported
00:25:56.562 Directives: Not Supported
00:25:56.562 NVMe-MI: Not Supported
00:25:56.562 Virtualization Management: Not Supported
00:25:56.562 Doorbell Buffer Config: Not Supported
00:25:56.562 Get LBA Status Capability: Not Supported
00:25:56.562 Command & Feature Lockdown Capability: Not Supported
00:25:56.562 Abort Command Limit: 4
00:25:56.562 Async Event Request Limit: 4
00:25:56.562 Number of Firmware Slots: N/A
00:25:56.562 Firmware Slot 1 Read-Only: N/A
00:25:56.562 Firmware Activation Without Reset: N/A
00:25:56.562 Multiple Update Detection Support: N/A
00:25:56.562 Firmware Update Granularity: No Information Provided
00:25:56.562 Per-Namespace SMART Log: No
00:25:56.562 Asymmetric Namespace Access Log Page: Not Supported
00:25:56.562 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:25:56.562 Command Effects Log Page: Supported
00:25:56.562 Get Log Page Extended Data: Supported
00:25:56.562 Telemetry Log Pages: Not Supported
00:25:56.562 Persistent Event Log Pages: Not Supported
00:25:56.562 Supported Log Pages Log Page: May Support
00:25:56.562 Commands Supported & Effects Log Page: Not Supported
00:25:56.562 Feature Identifiers & Effects Log Page:May Support
00:25:56.562 NVMe-MI Commands & Effects Log Page: May Support
00:25:56.562 Data Area 4 for Telemetry Log: Not Supported
00:25:56.562 Error Log Page Entries Supported: 128
00:25:56.562 Keep Alive: Supported
00:25:56.562 Keep Alive Granularity: 10000 ms
00:25:56.562
00:25:56.562 NVM Command Set Attributes
00:25:56.562 ==========================
00:25:56.562 Submission Queue Entry Size
00:25:56.562 Max: 64
00:25:56.562 Min: 64
00:25:56.562 Completion Queue Entry Size
00:25:56.562 Max: 16
00:25:56.562 Min: 16
00:25:56.562 Number of Namespaces: 32
00:25:56.562 Compare Command: Supported
00:25:56.562 Write Uncorrectable Command: Not Supported
00:25:56.562 Dataset Management Command: Supported
00:25:56.562 Write Zeroes Command: Supported
00:25:56.562 Set Features Save Field: Not Supported
00:25:56.562 Reservations: Supported
00:25:56.562 Timestamp: Not Supported
00:25:56.562 Copy: Supported
00:25:56.562 Volatile Write Cache: Present
00:25:56.562 Atomic Write Unit (Normal): 1
00:25:56.562 Atomic Write Unit (PFail): 1
00:25:56.562 Atomic Compare & Write Unit: 1
00:25:56.562 Fused Compare & Write: Supported
00:25:56.562 Scatter-Gather List
00:25:56.562 SGL Command Set: Supported
00:25:56.562 SGL Keyed: Supported
00:25:56.562 SGL Bit Bucket Descriptor: Not Supported
00:25:56.562 SGL Metadata Pointer: Not Supported
00:25:56.562 Oversized SGL: Not Supported
00:25:56.562 SGL Metadata Address: Not Supported
00:25:56.562 SGL Offset: Supported
00:25:56.562 Transport SGL Data Block: Not Supported
00:25:56.562 Replay Protected Memory Block: Not Supported
00:25:56.562
00:25:56.562 Firmware Slot Information
00:25:56.562 =========================
00:25:56.562 Active slot: 1
00:25:56.562 Slot 1 Firmware Revision: 25.01
00:25:56.562
00:25:56.562
00:25:56.562 Commands Supported and Effects
00:25:56.562 ==============================
00:25:56.562 Admin Commands
00:25:56.562 --------------
00:25:56.562 Get Log Page (02h): Supported
00:25:56.562 Identify (06h): Supported
00:25:56.562 Abort (08h): Supported
00:25:56.562 Set Features (09h): Supported
00:25:56.562 Get Features (0Ah): Supported
00:25:56.562 Asynchronous Event Request (0Ch): Supported
00:25:56.562 Keep Alive (18h): Supported
00:25:56.562 I/O Commands
00:25:56.562 ------------
00:25:56.562 Flush (00h): Supported LBA-Change
00:25:56.562 Write (01h): Supported LBA-Change
00:25:56.562 Read (02h): Supported
00:25:56.562 Compare (05h): Supported
00:25:56.562 Write Zeroes (08h): Supported LBA-Change
00:25:56.562 Dataset Management (09h): Supported LBA-Change
00:25:56.562 Copy (19h): Supported LBA-Change
00:25:56.562
00:25:56.562 Error Log
00:25:56.562 =========
00:25:56.562
00:25:56.562 Arbitration
00:25:56.562 ===========
00:25:56.562 Arbitration Burst: 1
00:25:56.562
00:25:56.562 Power Management
00:25:56.562 ================
00:25:56.562 Number of Power States: 1
00:25:56.562 Current Power State: Power State #0
00:25:56.562 Power State #0:
00:25:56.562 Max Power: 0.00 W
00:25:56.562 Non-Operational State: Operational
00:25:56.562 Entry Latency: Not Reported
00:25:56.562 Exit Latency: Not Reported
00:25:56.562 Relative Read Throughput: 0
00:25:56.562 Relative Read Latency: 0
00:25:56.562 Relative Write Throughput: 0
00:25:56.562 Relative Write Latency: 0
00:25:56.562 Idle Power: Not Reported
00:25:56.562 Active Power: Not Reported
00:25:56.562 Non-Operational Permissive Mode: Not Supported
00:25:56.562
00:25:56.562 Health Information
00:25:56.562 ==================
00:25:56.562 Critical Warnings:
00:25:56.562 Available Spare Space: OK
00:25:56.562 Temperature: OK
00:25:56.562 Device Reliability: OK
00:25:56.562 Read Only: No
00:25:56.562 Volatile Memory Backup: OK
00:25:56.562 Current Temperature: 0 Kelvin (-273 Celsius)
00:25:56.562 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:25:56.562 Available Spare: 0%
00:25:56.562 Available Spare Threshold: 0%
00:25:56.562 Life Percentage Used:[2024-10-07 09:46:56.205977] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:56.562 [2024-10-07 09:46:56.205983] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa1c620)
00:25:56.562 [2024-10-07 09:46:56.205989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.562 [2024-10-07 09:46:56.206003] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7cf00, cid 7, qid 0
00:25:56.562 [2024-10-07 09:46:56.206226] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:56.562 [2024-10-07 09:46:56.206233] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:56.562 [2024-10-07 09:46:56.206236] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:56.562 [2024-10-07 09:46:56.206240] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7cf00) on tqpair=0xa1c620
00:25:56.562 [2024-10-07 09:46:56.206273] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:25:56.562 [2024-10-07 09:46:56.206283] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c480) on tqpair=0xa1c620
00:25:56.562 [2024-10-07 09:46:56.206290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.562 [2024-10-07 09:46:56.206296] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c600) on tqpair=0xa1c620
00:25:56.562 [2024-10-07 09:46:56.206300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.562 [2024-10-07 09:46:56.206305] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c780) on tqpair=0xa1c620
00:25:56.562 [2024-10-07 09:46:56.206310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.562 [2024-10-07 09:46:56.206315] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c900) on tqpair=0xa1c620
00:25:56.562 [2024-10-07 09:46:56.206320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.562 [2024-10-07 09:46:56.206328] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:56.562 [2024-10-07 09:46:56.206332] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:56.562 [2024-10-07 09:46:56.206336] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1c620)
00:25:56.562 [2024-10-07 09:46:56.206345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.562 [2024-10-07 09:46:56.206358] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c900, cid 3, qid 0
00:25:56.562 [2024-10-07 09:46:56.206575] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 [2024-10-07 09:46:56.206581]
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.563 [2024-10-07 09:46:56.206585] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.206589] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c900) on tqpair=0xa1c620 00:25:56.563 [2024-10-07 09:46:56.206596] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.206600] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.206604] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1c620) 00:25:56.563 [2024-10-07 09:46:56.206610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.563 [2024-10-07 09:46:56.206633] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c900, cid 3, qid 0 00:25:56.563 [2024-10-07 09:46:56.206877] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.563 [2024-10-07 09:46:56.206883] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.563 [2024-10-07 09:46:56.206887] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.206891] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c900) on tqpair=0xa1c620 00:25:56.563 [2024-10-07 09:46:56.206896] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:56.563 [2024-10-07 09:46:56.206900] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:56.563 [2024-10-07 09:46:56.206910] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.206914] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.206917] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1c620) 00:25:56.563 [2024-10-07 09:46:56.206924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.563 [2024-10-07 09:46:56.206934] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c900, cid 3, qid 0 00:25:56.563 [2024-10-07 09:46:56.207142] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.563 [2024-10-07 09:46:56.207149] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.563 [2024-10-07 09:46:56.207152] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.207156] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c900) on tqpair=0xa1c620 00:25:56.563 [2024-10-07 09:46:56.207166] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.207170] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.207174] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1c620) 00:25:56.563 [2024-10-07 09:46:56.207181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.563 [2024-10-07 09:46:56.207191] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c900, cid 3, qid 0 00:25:56.563 [2024-10-07 09:46:56.207379] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.563 [2024-10-07 09:46:56.207386] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.563 [2024-10-07 09:46:56.207389] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.207393] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c900) on tqpair=0xa1c620 00:25:56.563 [2024-10-07 09:46:56.207403] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.207409] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.207413] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1c620) 00:25:56.563 [2024-10-07 09:46:56.207420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.563 [2024-10-07 09:46:56.207430] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c900, cid 3, qid 0 00:25:56.563 [2024-10-07 09:46:56.207633] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.563 [2024-10-07 09:46:56.207640] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.563 [2024-10-07 09:46:56.207643] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.207647] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c900) on tqpair=0xa1c620 00:25:56.563 [2024-10-07 09:46:56.207657] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.207661] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.207664] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1c620) 00:25:56.563 [2024-10-07 09:46:56.207671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.563 [2024-10-07 09:46:56.207681] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c900, cid 3, qid 0 00:25:56.563 [2024-10-07 09:46:56.207935] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.563 [2024-10-07 09:46:56.207941] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.563 [2024-10-07 09:46:56.207945] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.207948] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c900) on tqpair=0xa1c620 00:25:56.563 [2024-10-07 09:46:56.207958] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.207962] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.207966] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1c620) 00:25:56.563 [2024-10-07 09:46:56.207972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.563 [2024-10-07 09:46:56.207982] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c900, cid 3, qid 0 00:25:56.563 [2024-10-07 09:46:56.208194] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.563 [2024-10-07 09:46:56.208201] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.563 [2024-10-07 09:46:56.208204] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.208208] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c900) on tqpair=0xa1c620 00:25:56.563 [2024-10-07 09:46:56.208218] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.208222] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.208226] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1c620) 00:25:56.563 [2024-10-07 09:46:56.208232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.563 [2024-10-07 09:46:56.208242] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c900, cid 3, qid 0 00:25:56.563 [2024-10-07 09:46:56.208488] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.563 [2024-10-07 09:46:56.208494] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.563 [2024-10-07 09:46:56.208498] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.208502] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c900) on tqpair=0xa1c620 00:25:56.563 [2024-10-07 09:46:56.208512] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.208516] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.208522] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1c620) 00:25:56.563 [2024-10-07 09:46:56.208529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.563 [2024-10-07 09:46:56.208539] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c900, cid 3, qid 0 00:25:56.563 [2024-10-07 09:46:56.208791] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.563 [2024-10-07 09:46:56.208797] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.563 [2024-10-07 09:46:56.208801] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.208805] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c900) on tqpair=0xa1c620 00:25:56.563 [2024-10-07 09:46:56.208814] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.208818] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.208822] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1c620) 00:25:56.563 [2024-10-07 09:46:56.208829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.563 [2024-10-07 09:46:56.208839] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c900, cid 3, qid 0 00:25:56.563 [2024-10-07 09:46:56.209044] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.563 [2024-10-07 09:46:56.209051] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.563 [2024-10-07 09:46:56.209054] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.209058] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c900) on tqpair=0xa1c620 00:25:56.563 
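The run of near-identical records above is the host's shutdown poll loop: after nvme_ctrlr_destruct_async logs "Prepare to destruct SSD" and nvme_ctrlr_shutdown_set_cc_done arms the 10000 ms timeout, each FABRIC PROPERTY GET on qid:0 cid:3 is one read of the CSTS register over the fabrics property-get path, repeated until the shutdown-status bits flip (reported further down as "shutdown complete in 6 milliseconds"). The detach that starts this loop happens inside the identify example as it exits; the shell-side teardown that follows is short. A minimal sketch of those steps, with names taken from the trace:

    # Sketch of the shell teardown that runs once the controller detach above finishes.
    sync
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp        # nvmftestfini then unloads the host-side modules
    modprobe -v -r nvme-fabrics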
[2024-10-07 09:46:56.209068] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.209072] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.209075] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1c620) 00:25:56.563 [2024-10-07 09:46:56.209082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.563 [2024-10-07 09:46:56.209092] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c900, cid 3, qid 0 00:25:56.563 [2024-10-07 09:46:56.209308] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.563 [2024-10-07 09:46:56.209315] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.563 [2024-10-07 09:46:56.209318] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.209322] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c900) on tqpair=0xa1c620 00:25:56.563 [2024-10-07 09:46:56.209333] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.209336] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.563 [2024-10-07 09:46:56.209340] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1c620) 00:25:56.564 [2024-10-07 09:46:56.209347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.564 [2024-10-07 09:46:56.209356] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c900, cid 3, qid 0 00:25:56.564 [2024-10-07 09:46:56.209546] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.564 [2024-10-07 09:46:56.209553] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.564 [2024-10-07 09:46:56.209556] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.564 [2024-10-07 09:46:56.209560] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c900) on tqpair=0xa1c620 00:25:56.564 [2024-10-07 09:46:56.209570] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:56.564 [2024-10-07 09:46:56.209574] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:56.564 [2024-10-07 09:46:56.209577] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1c620) 00:25:56.564 [2024-10-07 09:46:56.209586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.564 [2024-10-07 09:46:56.209597] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa7c900, cid 3, qid 0 00:25:56.824 [2024-10-07 09:46:56.213631] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:56.824 [2024-10-07 09:46:56.213642] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:56.824 [2024-10-07 09:46:56.213645] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:56.825 [2024-10-07 09:46:56.213649] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa7c900) on tqpair=0xa1c620 00:25:56.825 [2024-10-07 09:46:56.213657] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:25:56.825 0% 00:25:56.825 Data Units Read: 0 00:25:56.825 Data 
Units Written: 0 00:25:56.825 Host Read Commands: 0 00:25:56.825 Host Write Commands: 0 00:25:56.825 Controller Busy Time: 0 minutes 00:25:56.825 Power Cycles: 0 00:25:56.825 Power On Hours: 0 hours 00:25:56.825 Unsafe Shutdowns: 0 00:25:56.825 Unrecoverable Media Errors: 0 00:25:56.825 Lifetime Error Log Entries: 0 00:25:56.825 Warning Temperature Time: 0 minutes 00:25:56.825 Critical Temperature Time: 0 minutes 00:25:56.825 00:25:56.825 Number of Queues 00:25:56.825 ================ 00:25:56.825 Number of I/O Submission Queues: 127 00:25:56.825 Number of I/O Completion Queues: 127 00:25:56.825 00:25:56.825 Active Namespaces 00:25:56.825 ================= 00:25:56.825 Namespace ID:1 00:25:56.825 Error Recovery Timeout: Unlimited 00:25:56.825 Command Set Identifier: NVM (00h) 00:25:56.825 Deallocate: Supported 00:25:56.825 Deallocated/Unwritten Error: Not Supported 00:25:56.825 Deallocated Read Value: Unknown 00:25:56.825 Deallocate in Write Zeroes: Not Supported 00:25:56.825 Deallocated Guard Field: 0xFFFF 00:25:56.825 Flush: Supported 00:25:56.825 Reservation: Supported 00:25:56.825 Namespace Sharing Capabilities: Multiple Controllers 00:25:56.825 Size (in LBAs): 131072 (0GiB) 00:25:56.825 Capacity (in LBAs): 131072 (0GiB) 00:25:56.825 Utilization (in LBAs): 131072 (0GiB) 00:25:56.825 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:56.825 EUI64: ABCDEF0123456789 00:25:56.825 UUID: 91957016-2bf7-4588-b2dc-77803a2049e6 00:25:56.825 Thin Provisioning: Not Supported 00:25:56.825 Per-NS Atomic Units: Yes 00:25:56.825 Atomic Boundary Size (Normal): 0 00:25:56.825 Atomic Boundary Size (PFail): 0 00:25:56.825 Atomic Boundary Offset: 0 00:25:56.825 Maximum Single Source Range Length: 65535 00:25:56.825 Maximum Copy Length: 65535 00:25:56.825 Maximum Source Range Count: 1 00:25:56.825 NGUID/EUI64 Never Reused: No 00:25:56.825 Namespace Write Protected: No 00:25:56.825 Number of LBA Formats: 1 00:25:56.825 Current LBA Format: LBA Format #00 00:25:56.825 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:56.825 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:56.825 rmmod nvme_tcp 00:25:56.825 rmmod nvme_fabrics 00:25:56.825 rmmod nvme_keyring 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 3455625 ']' 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 3455625 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' -z 3455625 ']' 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # kill -0 3455625 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # uname 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3455625 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3455625' 00:25:56.825 killing process with pid 3455625 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # kill 3455625 00:25:56.825 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@977 -- # wait 3455625 00:25:57.086 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:57.087 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:57.087 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:57.087 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:25:57.087 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:25:57.087 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:57.087 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:25:57.087 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:57.087 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:57.087 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.087 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.087 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:59.631 00:25:59.631 real 0m11.933s 00:25:59.631 user 0m8.386s 00:25:59.631 sys 0m6.364s 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # xtrace_disable 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:59.631 ************************************ 00:25:59.631 END TEST nvmf_identify 00:25:59.631 ************************************ 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1110 -- # xtrace_disable 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.631 ************************************ 00:25:59.631 START TEST nvmf_perf 00:25:59.631 ************************************ 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:59.631 * Looking for test storage... 00:25:59.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1626 -- # lcov --version 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:59.631 09:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:59.631 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:59.631 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:59.631 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:59.631 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:59.631 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:59.631 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:59.631 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:59.631 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:59.631 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:25:59.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.631 --rc genhtml_branch_coverage=1 00:25:59.631 --rc genhtml_function_coverage=1 00:25:59.631 --rc genhtml_legend=1 00:25:59.631 --rc geninfo_all_blocks=1 00:25:59.631 --rc geninfo_unexecuted_blocks=1 00:25:59.631 00:25:59.631 ' 00:25:59.631 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:25:59.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.631 --rc genhtml_branch_coverage=1 00:25:59.631 --rc genhtml_function_coverage=1 00:25:59.631 --rc genhtml_legend=1 00:25:59.631 --rc geninfo_all_blocks=1 00:25:59.631 --rc geninfo_unexecuted_blocks=1 00:25:59.631 00:25:59.631 ' 00:25:59.631 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:25:59.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.631 --rc genhtml_branch_coverage=1 00:25:59.631 --rc genhtml_function_coverage=1 00:25:59.632 --rc genhtml_legend=1 00:25:59.632 --rc geninfo_all_blocks=1 00:25:59.632 --rc geninfo_unexecuted_blocks=1 00:25:59.632 00:25:59.632 ' 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:25:59.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.632 --rc genhtml_branch_coverage=1 00:25:59.632 --rc genhtml_function_coverage=1 00:25:59.632 --rc genhtml_legend=1 00:25:59.632 --rc geninfo_all_blocks=1 00:25:59.632 --rc geninfo_unexecuted_blocks=1 00:25:59.632 00:25:59.632 ' 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:59.632 
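The numeric test traced just above, '[' '' -eq 1 ']', is the source of the "integer expression expected" complaint in the next record: nvmf/common.sh line 33 compares a variable that autorun-spdk.conf never set, so bash sees an empty string where it expects an integer. It is harmless here, but a default expansion would keep the trace clean; a hedged sketch (the variable name is illustrative, not recovered from the source):

    # Hypothetical guard for the line-33 test; SOME_NVMF_FLAG is an assumed name.
    : "${SOME_NVMF_FLAG:=0}"          # default the unset flag to 0 before comparing
    if [ "$SOME_NVMF_FLAG" -eq 1 ]; then
        :                             # the original line-33 body would go here
    fi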
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:59.632 09:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:07.779 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:07.779 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.779 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:07.780 
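gather_supported_nvmf_pci_devs buckets NICs purely by PCI vendor:device pairs: 0x8086:0x159b and 0x8086:0x1592 are E810 parts, 0x8086:0x37d2 is X722, and the 0x15b3 IDs are Mellanox; with SPDK_TEST_NVMF_NICS=e810 the e810 bucket becomes the working pci_devs list, which is why both 0000:31:00.x ports matched above. A rough standalone sketch of the same classification against sysfs (not the harness's pci_bus_cache code):

    # Sketch: bucket NICs by PCI vendor/device ID, mirroring the lookups above.
    declare -a e810=() x722=()
    for dev in /sys/bus/pci/devices/*; do
        ven=$(<"$dev/vendor") id=$(<"$dev/device")
        if [[ $ven == 0x8086 && ( $id == 0x159b || $id == 0x1592 ) ]]; then
            e810+=("${dev##*/}")
        elif [[ $ven == 0x8086 && $id == 0x37d2 ]]; then
            x722+=("${dev##*/}")
        fi
    done
    echo "e810 ports: ${e810[*]}"     # net names then come from /sys/bus/pci/devices/<pci>/net/*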
09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:07.780 Found net devices under 0000:31:00.0: cvl_0_0 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:07.780 Found net devices under 0000:31:00.1: cvl_0_1 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:07.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:07.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:26:07.780 00:26:07.780 --- 10.0.0.2 ping statistics --- 00:26:07.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.780 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:07.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:07.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:26:07.780 00:26:07.780 --- 10.0.0.1 ping statistics --- 00:26:07.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.780 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=3460371 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 3460371 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # '[' -z 3460371 ']' 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local max_retries=100 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@843 -- # xtrace_disable 00:26:07.780 09:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:07.780 [2024-10-07 09:47:06.900927] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:26:07.780 [2024-10-07 09:47:06.900994] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.780 [2024-10-07 09:47:06.989380] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:07.780 [2024-10-07 09:47:07.085502] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
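With both pings answering, the rig at this point is: physical port cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 left in the root namespace as the initiator (10.0.0.1), and an iptables ACCEPT rule for TCP port 4420; nvmf_tgt then starts inside that namespace and waitforlisten blocks until the RPC socket answers. Condensed from the trace (the rpc_get_methods poll is an assumption about what waitforlisten does internally):

    # Sketch: the two-namespace TCP rig that the pings above just validated.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1 && ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until scripts/rpc.py rpc_get_methods &>/dev/null; do sleep 0.1; done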
00:26:07.780 [2024-10-07 09:47:07.085565] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:07.780 [2024-10-07 09:47:07.085574] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:07.781 [2024-10-07 09:47:07.085583] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:07.781 [2024-10-07 09:47:07.085589] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:07.781 [2024-10-07 09:47:07.087669] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.781 [2024-10-07 09:47:07.087917] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:07.781 [2024-10-07 09:47:07.088083] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:07.781 [2024-10-07 09:47:07.088085] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.355 09:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:26:08.355 09:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@867 -- # return 0 00:26:08.355 09:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:08.355 09:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@733 -- # xtrace_disable 00:26:08.355 09:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:08.355 09:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:08.355 09:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:08.355 09:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:08.928 09:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:08.928 09:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:08.928 09:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:26:08.928 09:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:09.190 09:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:09.190 09:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:26:09.190 09:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:09.190 09:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:09.190 09:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:09.450 [2024-10-07 09:47:08.884333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.450 09:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:09.711 09:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:09.711 09:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:09.711 09:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:09.711 09:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:09.971 09:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:10.233 [2024-10-07 09:47:09.639267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.233 09:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:10.233 09:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:26:10.233 09:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:10.233 09:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:10.233 09:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:11.616 Initializing NVMe Controllers 00:26:11.616 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:26:11.616 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:26:11.616 Initialization complete. Launching workers. 00:26:11.616 ======================================================== 00:26:11.616 Latency(us) 00:26:11.616 Device Information : IOPS MiB/s Average min max 00:26:11.616 PCIE (0000:65:00.0) NSID 1 from core 0: 78529.98 306.76 406.88 13.33 5965.53 00:26:11.616 ======================================================== 00:26:11.616 Total : 78529.98 306.76 406.88 13.33 5965.53 00:26:11.616 00:26:11.616 09:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:13.000 Initializing NVMe Controllers 00:26:13.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:13.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:13.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:13.000 Initialization complete. Launching workers. 
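# The bring-up traced above reduces to this RPC sequence (arguments copied
# verbatim from the trace; "rpc.py" is shorthand for the full scripts/rpc.py
# path shown in the log):
#   rpc.py bdev_malloc_create 64 512
#   rpc.py nvmf_create_transport -t tcp -o
#   rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
#   rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
#   rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
#   rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
#   rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# The table that follows is the first fabric-side run (-q 1) against this target.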
00:26:13.000 ======================================================== 00:26:13.000 Latency(us) 00:26:13.000 Device Information : IOPS MiB/s Average min max 00:26:13.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.00 0.31 12856.78 226.75 45608.61 00:26:13.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.00 0.18 22706.70 7956.89 47893.54 00:26:13.000 ======================================================== 00:26:13.000 Total : 126.00 0.49 16452.78 226.75 47893.54 00:26:13.000 00:26:13.000 09:47:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:14.388 Initializing NVMe Controllers 00:26:14.388 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:14.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:14.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:14.388 Initialization complete. Launching workers. 00:26:14.388 ======================================================== 00:26:14.388 Latency(us) 00:26:14.388 Device Information : IOPS MiB/s Average min max 00:26:14.388 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11750.00 45.90 2724.91 357.85 9775.54 00:26:14.388 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3799.00 14.84 8477.86 4425.28 19089.62 00:26:14.388 ======================================================== 00:26:14.388 Total : 15549.00 60.74 4130.49 357.85 19089.62 00:26:14.388 00:26:14.388 09:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:14.388 09:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:14.388 09:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:16.935 Initializing NVMe Controllers 00:26:16.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:16.935 Controller IO queue size 128, less than required. 00:26:16.935 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:16.935 Controller IO queue size 128, less than required. 00:26:16.935 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:16.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:16.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:16.935 Initialization complete. Launching workers. 
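# Each perf run above varies only queue depth and IO size against the same
# target; the general shape, as this harness uses it, is:
#   spdk_nvme_perf -q <depth> -o <bytes> -w randrw -M 50 -t <seconds> \
#       -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
# -q is outstanding IOs per queue, -o the IO size in bytes, -w randrw with
# -M 50 a 50/50 random read/write mix, and -t the run time. The next table is
# the -q 128 large-IO run, where 262144-byte IOs push per-IO latency well into
# the tens of milliseconds.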
00:26:16.935 ======================================================== 00:26:16.935 Latency(us) 00:26:16.935 Device Information : IOPS MiB/s Average min max 00:26:16.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2301.98 575.49 56428.39 34466.79 100692.83 00:26:16.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 614.86 153.72 213634.97 48263.70 309400.73 00:26:16.935 ======================================================== 00:26:16.935 Total : 2916.84 729.21 89567.05 34466.79 309400.73 00:26:16.935 00:26:16.935 09:47:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:17.196 No valid NVMe controllers or AIO or URING devices found 00:26:17.196 Initializing NVMe Controllers 00:26:17.197 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:17.197 Controller IO queue size 128, less than required. 00:26:17.197 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:17.197 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:17.197 Controller IO queue size 128, less than required. 00:26:17.197 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:17.197 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:26:17.197 WARNING: Some requested NVMe devices were skipped 00:26:17.197 09:47:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:20.497 Initializing NVMe Controllers 00:26:20.497 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:20.497 Controller IO queue size 128, less than required. 00:26:20.497 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:20.497 Controller IO queue size 128, less than required. 00:26:20.497 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:20.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:20.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:20.497 Initialization complete. Launching workers. 
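# The -o 36964 attempt above found nothing to drive: 36964 is not a multiple
# of the 512-byte sector size (512 * 72 = 36864, remainder 100), so both
# namespaces were dropped and perf reported "No valid NVMe controllers".
# The alignment rule it enforces is simply:
#   (( 36964 % 512 == 0 )) || echo "unaligned"    # prints "unaligned"
# The --transport-stat run whose workers just launched adds per-poll-group TCP
# counters (polls, idle_polls, sock_completions, nvme_completions,
# submitted/queued requests) ahead of the usual latency table below.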
00:26:20.497 00:26:20.497 ==================== 00:26:20.497 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:20.497 TCP transport: 00:26:20.497 polls: 32949 00:26:20.497 idle_polls: 21781 00:26:20.497 sock_completions: 11168 00:26:20.497 nvme_completions: 6735 00:26:20.497 submitted_requests: 10164 00:26:20.497 queued_requests: 1 00:26:20.497 00:26:20.497 ==================== 00:26:20.497 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:20.497 TCP transport: 00:26:20.497 polls: 35678 00:26:20.497 idle_polls: 24210 00:26:20.497 sock_completions: 11468 00:26:20.497 nvme_completions: 9343 00:26:20.497 submitted_requests: 13890 00:26:20.497 queued_requests: 1 00:26:20.497 ======================================================== 00:26:20.497 Latency(us) 00:26:20.497 Device Information : IOPS MiB/s Average min max 00:26:20.497 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1683.44 420.86 77736.99 34706.35 146288.54 00:26:20.497 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2335.42 583.85 55087.39 30512.47 95158.38 00:26:20.497 ======================================================== 00:26:20.497 Total : 4018.86 1004.71 64574.97 30512.47 146288.54 00:26:20.497 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:20.497 rmmod nvme_tcp 00:26:20.497 rmmod nvme_fabrics 00:26:20.497 rmmod nvme_keyring 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 3460371 ']' 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 3460371 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' -z 3460371 ']' 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # kill -0 3460371 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # uname 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3460371 00:26:20.497 09:47:19 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3460371' 00:26:20.497 killing process with pid 3460371 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # kill 3460371 00:26:20.497 09:47:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@977 -- # wait 3460371 00:26:22.406 09:47:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:22.406 09:47:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:22.406 09:47:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:22.406 09:47:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:26:22.406 09:47:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:26:22.406 09:47:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:22.406 09:47:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:26:22.406 09:47:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:22.406 09:47:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:22.406 09:47:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.406 09:47:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:22.406 09:47:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.317 09:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:24.317 00:26:24.317 real 0m25.131s 00:26:24.317 user 1m0.361s 00:26:24.317 sys 0m9.011s 00:26:24.317 09:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # xtrace_disable 00:26:24.317 09:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:24.317 ************************************ 00:26:24.317 END TEST nvmf_perf 00:26:24.317 ************************************ 00:26:24.317 09:47:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:24.317 09:47:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:26:24.317 09:47:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1110 -- # xtrace_disable 00:26:24.317 09:47:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.577 ************************************ 00:26:24.578 START TEST nvmf_fio_host 00:26:24.578 ************************************ 00:26:24.578 09:47:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:24.578 * Looking for test storage... 
00:26:24.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1626 -- # lcov --version 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:26:24.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.578 --rc genhtml_branch_coverage=1 00:26:24.578 --rc genhtml_function_coverage=1 00:26:24.578 --rc genhtml_legend=1 00:26:24.578 --rc geninfo_all_blocks=1 00:26:24.578 --rc geninfo_unexecuted_blocks=1 00:26:24.578 00:26:24.578 ' 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:26:24.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.578 --rc genhtml_branch_coverage=1 00:26:24.578 --rc genhtml_function_coverage=1 00:26:24.578 --rc genhtml_legend=1 00:26:24.578 --rc geninfo_all_blocks=1 00:26:24.578 --rc geninfo_unexecuted_blocks=1 00:26:24.578 00:26:24.578 ' 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:26:24.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.578 --rc genhtml_branch_coverage=1 00:26:24.578 --rc genhtml_function_coverage=1 00:26:24.578 --rc genhtml_legend=1 00:26:24.578 --rc geninfo_all_blocks=1 00:26:24.578 --rc geninfo_unexecuted_blocks=1 00:26:24.578 00:26:24.578 ' 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:26:24.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.578 --rc genhtml_branch_coverage=1 00:26:24.578 --rc genhtml_function_coverage=1 00:26:24.578 --rc genhtml_legend=1 00:26:24.578 --rc geninfo_all_blocks=1 00:26:24.578 --rc geninfo_unexecuted_blocks=1 00:26:24.578 00:26:24.578 ' 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.578 09:47:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:24.578 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.839 
09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.839 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:24.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:24.840 09:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:32.982 
09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:32.982 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:32.983 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:32.983 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:32.983 Found net devices under 0000:31:00.0: cvl_0_0 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
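# The discovery loop above resolves each matched PCI function to its kernel
# net devices through sysfs; a standalone sketch of the same lookup, using the
# first E810 address reported in this log, would be:
#   pci=0000:31:00.0
#   for net in /sys/bus/pci/devices/$pci/net/*; do
#       [ -e "$net" ] && echo "Found net device under $pci: ${net##*/}"
#   done
# Here it yields cvl_0_0 and, for 0000:31:00.1, cvl_0_1; those two interfaces
# back the 10.0.0.2 target and 10.0.0.1 initiator endpoints set up next.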
00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:32.983 Found net devices under 0000:31:00.1: cvl_0_1 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:32.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:32.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:26:32.983 00:26:32.983 --- 10.0.0.2 ping statistics --- 00:26:32.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.983 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:32.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:32.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:26:32.983 00:26:32.983 --- 10.0.0.1 ping statistics --- 00:26:32.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.983 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3467511 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3467511 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # '[' -z 3467511 ']' 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local max_retries=100 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@843 -- # xtrace_disable 00:26:32.983 09:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.983 [2024-10-07 09:47:32.044103] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:26:32.983 [2024-10-07 09:47:32.044169] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.983 [2024-10-07 09:47:32.114823] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:32.983 [2024-10-07 09:47:32.200999] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.983 [2024-10-07 09:47:32.201052] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.983 [2024-10-07 09:47:32.201060] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.983 [2024-10-07 09:47:32.201067] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.983 [2024-10-07 09:47:32.201072] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:32.983 [2024-10-07 09:47:32.202740] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.983 [2024-10-07 09:47:32.202945] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:32.983 [2024-10-07 09:47:32.203168] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:32.983 [2024-10-07 09:47:32.203170] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.983 09:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:26:32.983 09:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@867 -- # return 0 00:26:32.983 09:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:32.983 [2024-10-07 09:47:32.488759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.984 09:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:32.984 09:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@733 -- # xtrace_disable 00:26:32.984 09:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.984 09:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:33.280 Malloc1 00:26:33.280 09:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:33.542 09:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:33.542 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:33.803 [2024-10-07 09:47:33.341829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.803 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1325 -- # local fio_dir=/usr/src/fio 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1327 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1327 -- # local sanitizers 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1328 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1329 -- # shift 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1331 -- # local asan_lib= 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1332 -- # for sanitizer in "${sanitizers[@]}" 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # grep libasan 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # awk '{print $3}' 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # asan_lib= 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1334 -- # [[ -n '' ]] 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1332 -- # for sanitizer in "${sanitizers[@]}" 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # grep libclang_rt.asan 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # awk '{print $3}' 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # asan_lib= 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1334 -- # [[ -n '' ]] 00:26:34.065 09:47:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:34.065 09:47:33 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:34.326 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:34.326 fio-3.35 00:26:34.326 Starting 1 thread 00:26:36.873 00:26:36.873 test: (groupid=0, jobs=1): err= 0: pid=3468042: Mon Oct 7 09:47:36 2024 00:26:36.873 read: IOPS=12.4k, BW=48.3MiB/s (50.6MB/s)(96.8MiB/2005msec) 00:26:36.873 slat (usec): min=2, max=257, avg= 2.17, stdev= 2.38 00:26:36.873 clat (usec): min=3353, max=9089, avg=5694.02, stdev=1079.92 00:26:36.873 lat (usec): min=3393, max=9091, avg=5696.19, stdev=1079.97 00:26:36.873 clat percentiles (usec): 00:26:36.873 | 1.00th=[ 4424], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4948], 00:26:36.873 | 30.00th=[ 5080], 40.00th=[ 5145], 50.00th=[ 5276], 60.00th=[ 5407], 00:26:36.873 | 70.00th=[ 5604], 80.00th=[ 6849], 90.00th=[ 7635], 95.00th=[ 8029], 00:26:36.873 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 8848], 99.95th=[ 8979], 00:26:36.873 | 99.99th=[ 8979] 00:26:36.873 bw ( KiB/s): min=35888, max=56104, per=100.00%, avg=49476.00, stdev=9309.58, samples=4 00:26:36.873 iops : min= 8972, max=14026, avg=12369.00, stdev=2327.40, samples=4 00:26:36.873 write: IOPS=12.4k, BW=48.3MiB/s (50.6MB/s)(96.8MiB/2005msec); 0 zone resets 00:26:36.873 slat (usec): min=2, max=255, avg= 2.26, stdev= 1.78 00:26:36.873 clat (usec): min=2632, max=8541, avg=4597.41, stdev=877.54 00:26:36.873 lat (usec): min=2648, max=8543, avg=4599.67, stdev=877.63 00:26:36.873 clat percentiles (usec): 00:26:36.873 | 1.00th=[ 3523], 5.00th=[ 3720], 10.00th=[ 3851], 20.00th=[ 3982], 00:26:36.873 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4359], 00:26:36.873 | 70.00th=[ 4490], 80.00th=[ 5604], 90.00th=[ 6194], 95.00th=[ 6456], 00:26:36.873 | 99.00th=[ 6849], 99.50th=[ 6980], 99.90th=[ 7373], 99.95th=[ 7898], 00:26:36.873 | 99.99th=[ 8225] 00:26:36.873 bw ( KiB/s): min=36744, max=55384, per=99.93%, avg=49378.00, stdev=8787.05, samples=4 00:26:36.873 iops : min= 9186, max=13846, avg=12344.50, stdev=2196.76, samples=4 00:26:36.873 lat (msec) : 4=10.74%, 10=89.26% 00:26:36.873 cpu : usr=69.51%, sys=28.99%, ctx=24, majf=0, minf=18 00:26:36.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:36.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:36.873 issued rwts: total=24786,24768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.873 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:36.873 00:26:36.873 Run status group 0 (all jobs): 00:26:36.873 READ: bw=48.3MiB/s (50.6MB/s), 48.3MiB/s-48.3MiB/s (50.6MB/s-50.6MB/s), io=96.8MiB (102MB), run=2005-2005msec 00:26:36.873 WRITE: bw=48.3MiB/s (50.6MB/s), 48.3MiB/s-48.3MiB/s (50.6MB/s-50.6MB/s), io=96.8MiB (101MB), run=2005-2005msec 00:26:36.873 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:36.873 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1325 -- # local fio_dir=/usr/src/fio 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1327 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1327 -- # local sanitizers 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1328 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1329 -- # shift 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1331 -- # local asan_lib= 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1332 -- # for sanitizer in "${sanitizers[@]}" 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # grep libasan 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # awk '{print $3}' 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # asan_lib= 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1334 -- # [[ -n '' ]] 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1332 -- # for sanitizer in "${sanitizers[@]}" 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # grep libclang_rt.asan 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # awk '{print $3}' 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # asan_lib= 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1334 -- # [[ -n '' ]] 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:36.874 09:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:37.148 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:37.148 fio-3.35 00:26:37.148 Starting 1 thread 00:26:39.696 00:26:39.696 test: (groupid=0, jobs=1): err= 0: pid=3468748: Mon Oct 7 09:47:38 2024 00:26:39.696 read: IOPS=9521, BW=149MiB/s (156MB/s)(298MiB/2005msec) 00:26:39.696 slat (usec): min=3, max=114, avg= 3.60, stdev= 1.66 00:26:39.696 clat (usec): min=1274, max=16481, avg=8220.64, stdev=1956.24 00:26:39.696 lat (usec): min=1277, max=16499, avg=8224.24, stdev=1956.44 00:26:39.696 clat percentiles (usec): 00:26:39.696 | 1.00th=[ 4359], 5.00th=[ 5276], 10.00th=[ 5735], 20.00th=[ 6390], 00:26:39.696 | 30.00th=[ 6980], 40.00th=[ 7570], 50.00th=[ 
8160], 60.00th=[ 8717], 00:26:39.696 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[10683], 95.00th=[11338], 00:26:39.696 | 99.00th=[12780], 99.50th=[13435], 99.90th=[15270], 99.95th=[15401], 00:26:39.696 | 99.99th=[16188] 00:26:39.696 bw ( KiB/s): min=69312, max=85568, per=49.58%, avg=75528.00, stdev=7334.28, samples=4 00:26:39.696 iops : min= 4332, max= 5348, avg=4720.50, stdev=458.39, samples=4 00:26:39.696 write: IOPS=5579, BW=87.2MiB/s (91.4MB/s)(154MiB/1770msec); 0 zone resets 00:26:39.696 slat (usec): min=39, max=448, avg=41.18, stdev= 9.37 00:26:39.696 clat (usec): min=1862, max=16975, avg=9119.04, stdev=1470.53 00:26:39.696 lat (usec): min=1902, max=17112, avg=9160.22, stdev=1473.95 00:26:39.696 clat percentiles (usec): 00:26:39.696 | 1.00th=[ 5866], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 7963], 00:26:39.696 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:26:39.696 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[10814], 95.00th=[11338], 00:26:39.696 | 99.00th=[12911], 99.50th=[15401], 99.90th=[16581], 99.95th=[16712], 00:26:39.696 | 99.99th=[16909] 00:26:39.696 bw ( KiB/s): min=71936, max=89088, per=88.04%, avg=78600.00, stdev=7644.65, samples=4 00:26:39.696 iops : min= 4496, max= 5568, avg=4912.50, stdev=477.79, samples=4 00:26:39.696 lat (msec) : 2=0.06%, 4=0.38%, 10=76.53%, 20=23.04% 00:26:39.696 cpu : usr=84.78%, sys=13.82%, ctx=16, majf=0, minf=30 00:26:39.696 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:39.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:39.696 issued rwts: total=19090,9876,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.696 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:39.696 00:26:39.696 Run status group 0 (all jobs): 00:26:39.696 READ: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=298MiB (313MB), run=2005-2005msec 00:26:39.696 WRITE: bw=87.2MiB/s (91.4MB/s), 87.2MiB/s-87.2MiB/s (91.4MB/s-91.4MB/s), io=154MiB (162MB), run=1770-1770msec 00:26:39.696 09:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:39.696 rmmod nvme_tcp 00:26:39.696 rmmod nvme_fabrics 00:26:39.696 rmmod nvme_keyring 00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@128 -- # set -e
00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0
00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 3467511 ']'
00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 3467511
00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' -z 3467511 ']'
00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # kill -0 3467511
00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # uname
00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3467511
00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # process_name=reactor_0
00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']'
00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3467511'
killing process with pid 3467511
00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # kill 3467511
00:26:39.696 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@977 -- # wait 3467511
00:26:39.959 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:26:39.959 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:26:39.959 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:26:39.959 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr
00:26:39.959 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:26:39.959 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save
00:26:39.959 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore
00:26:39.959 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:39.959 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:39.959 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:39.959 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:39.959 09:47:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:41.875 09:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:41.875
00:26:41.875 real	0m17.531s
00:26:41.875 user	0m58.146s
00:26:41.875 sys	0m7.964s
00:26:41.875 09:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # xtrace_disable
00:26:41.875 09:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:26:41.875 ************************************
00:26:41.875 END TEST nvmf_fio_host
00:26:41.875 ************************************
00:26:42.136 09:47:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
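The iptr cleanup that just ran works off the comment tag attached when the test's ACCEPT rule was installed by ipts: the ruleset is saved, every SPDK_NVMF-tagged rule is filtered out, and the remainder is restored. Roughly, using the same rule seen later in this run:

    # install the rule with a recognizable tag...
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # ...and later drop every tagged rule in one pass
    iptables-save | grep -v SPDK_NVMF | iptables-restore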
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:26:42.136 09:47:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1110 -- # xtrace_disable 00:26:42.136 09:47:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.136 ************************************ 00:26:42.136 START TEST nvmf_failover 00:26:42.136 ************************************ 00:26:42.136 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:42.136 * Looking for test storage... 00:26:42.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:42.136 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:26:42.136 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1626 -- # lcov --version 00:26:42.136 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:42.398 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:26:42.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.399 --rc genhtml_branch_coverage=1 00:26:42.399 --rc genhtml_function_coverage=1 00:26:42.399 --rc genhtml_legend=1 00:26:42.399 --rc geninfo_all_blocks=1 00:26:42.399 --rc geninfo_unexecuted_blocks=1 00:26:42.399 00:26:42.399 ' 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:26:42.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.399 --rc genhtml_branch_coverage=1 00:26:42.399 --rc genhtml_function_coverage=1 00:26:42.399 --rc genhtml_legend=1 00:26:42.399 --rc geninfo_all_blocks=1 00:26:42.399 --rc geninfo_unexecuted_blocks=1 00:26:42.399 00:26:42.399 ' 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:26:42.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.399 --rc genhtml_branch_coverage=1 00:26:42.399 --rc genhtml_function_coverage=1 00:26:42.399 --rc genhtml_legend=1 00:26:42.399 --rc geninfo_all_blocks=1 00:26:42.399 --rc geninfo_unexecuted_blocks=1 00:26:42.399 00:26:42.399 ' 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:26:42.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.399 --rc genhtml_branch_coverage=1 00:26:42.399 --rc genhtml_function_coverage=1 00:26:42.399 --rc genhtml_legend=1 00:26:42.399 --rc geninfo_all_blocks=1 00:26:42.399 --rc geninfo_unexecuted_blocks=1 00:26:42.399 00:26:42.399 ' 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:26:42.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:26:42.399 09:47:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:50.549 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.549 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:26:50.549 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:50.549 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:50.549 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:50.549 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:50.549 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:50.549 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:26:50.549 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- 
# local -ga x722 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:50.550 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:50.550 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:50.550 09:47:49 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:50.550 Found net devices under 0000:31:00.0: cvl_0_0 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:50.550 Found net devices under 0000:31:00.1: cvl_0_1 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.550 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:50.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:26:50.551 00:26:50.551 --- 10.0.0.2 ping statistics --- 00:26:50.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.551 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:50.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:26:50.551 00:26:50.551 --- 10.0.0.1 ping statistics --- 00:26:50.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.551 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=3473590 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 3473590 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # '[' -z 3473590 ']' 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local max_retries=100 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@843 -- # xtrace_disable 00:26:50.551 09:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:50.551 [2024-10-07 09:47:49.678638] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:26:50.551 [2024-10-07 09:47:49.678707] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.551 [2024-10-07 09:47:49.768225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:50.551 [2024-10-07 09:47:49.860819] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
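Both pings above succeed because nvmf_tcp_init split the two e810 ports across network namespaces: cvl_0_0 was moved into cvl_0_0_ns_spdk and addressed as the target (10.0.0.2), while cvl_0_1 stayed in the root namespace as the initiator (10.0.0.1). Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator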
00:26:50.551 [2024-10-07 09:47:49.860881] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:50.551 [2024-10-07 09:47:49.860890] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:50.551 [2024-10-07 09:47:49.860897] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:50.551 [2024-10-07 09:47:49.860903] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:50.551 [2024-10-07 09:47:49.862246] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:26:50.551 [2024-10-07 09:47:49.862410] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:26:50.551 [2024-10-07 09:47:49.862411] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:26:51.125 09:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@863 -- # (( i == 0 ))
00:26:51.125 09:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@867 -- # return 0
00:26:51.125 09:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:26:51.125 09:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@733 -- # xtrace_disable
00:26:51.125 09:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:51.125 09:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:51.125 09:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:26:51.125 [2024-10-07 09:47:50.713443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:51.125 09:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:26:51.387 Malloc0
00:26:51.387 09:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:51.648 09:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:51.909 09:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:51.909 [2024-10-07 09:47:51.531986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:51.909 09:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:52.171 [2024-10-07 09:47:51.732651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
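Stripped of the xtrace noise, the target bring-up driven by failover.sh is a short RPC sequence (rpc.py stands for the scripts/rpc.py path above; the third listener on 4422 follows immediately below):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s "$port"
    done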
00:26:52.171 09:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:52.433 [2024-10-07 09:47:51.925357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:26:52.433 09:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:26:52.433 09:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3473969
00:26:52.433 09:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:52.433 09:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3473969 /var/tmp/bdevperf.sock
00:26:52.433 09:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # '[' -z 3473969 ']'
00:26:52.433 09:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:52.433 09:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local max_retries=100
00:26:52.433 09:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:26:52.433 09:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@843 -- # xtrace_disable
00:26:52.433 09:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:53.379 09:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@863 -- # (( i == 0 ))
00:26:53.379 09:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@867 -- # return 0
00:26:53.379 09:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:53.640 NVMe0n1
00:26:53.640 09:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:53.901
00:26:53.901 09:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:53.901 09:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3474304
00:26:53.901 09:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
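bdevperf is the I/O side of the drill: launched with -z it idles until driven over its own RPC socket, the two bdev_nvme_attach_controller calls register ports 4420 and 4421 as paths of the same NVMe0 bdev, and perform_tests starts the verify workload that must survive the listener changes below. Condensed (bdevperf and rpc.py paths abbreviated):

    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # -> NVMe0n1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # second path
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &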
00:26:54.947 09:47:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:54.947 [2024-10-07 09:47:54.525488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19175f0 is same with the state(6) to be set
00:26:54.947 09:47:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:26:58.254 09:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:58.514
00:26:58.514 09:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:58.514 [2024-10-07 09:47:58.136234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1918460 is same with the state(6) to be set
00:26:58.514 09:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:27:01.811 09:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:01.811 [2024-10-07 09:48:01.325364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:01.811 09:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:27:02.755 09:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:27:03.025 [2024-10-07 09:48:02.514847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19194e0 is same with the state(6) to be set
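The failover itself is driven entirely from the target side: each nvmf_subsystem_remove_listener yanks a portal out from under the running verify job, the burst of nvmf_tcp_qpair_set_recv_state messages accompanies the target tearing down the dropped connections, and the host is expected to resume I/O on a surviving path. The full cycle, condensed:

    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3     # I/O should continue via 4421
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # third path
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3     # I/O should continue via 4422
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420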
00:27:03.026 09:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3474304
00:27:09.619 {
00:27:09.619   "results": [
00:27:09.619     {
00:27:09.619       "job": "NVMe0n1",
00:27:09.619       "core_mask": "0x1",
00:27:09.619       "workload": "verify",
00:27:09.619       "status": "finished",
00:27:09.619       "verify_range": {
00:27:09.619         "start": 0,
00:27:09.619         "length": 16384
00:27:09.619       },
00:27:09.619       "queue_depth": 128,
00:27:09.619       "io_size": 4096,
00:27:09.619       "runtime": 15.006192,
00:27:09.619       "iops": 12449.794058346048,
00:27:09.619       "mibps": 48.63200804041425,
00:27:09.619       "io_failed": 6709,
00:27:09.619       "io_timeout": 0,
00:27:09.619       "avg_latency_us": 9903.653754553488,
00:27:09.619       "min_latency_us": 373.76,
00:27:09.619       "max_latency_us": 22063.786666666667
00:27:09.619     }
00:27:09.619   ],
00:27:09.619   "core_count": 1
00:27:09.619 }
00:27:09.619 09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3473969
00:27:09.619 09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' -z 3473969 ']'
00:27:09.619 09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # kill -0 3473969
00:27:09.619 09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # uname
00:27:09.619 09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:27:09.619 09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3473969
00:27:09.619 09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # process_name=reactor_0
00:27:09.619 09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']'
00:27:09.619 09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3473969'
00:27:09.619 killing process with pid 3473969
00:27:09.619 09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # kill 3473969
00:27:09.619 09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@977 -- # wait 3473969
00:27:09.619 09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:09.619 [2024-10-07 09:47:52.005663] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization...
00:27:09.619 [2024-10-07 09:47:52.005743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473969 ]
00:27:09.619 [2024-10-07 09:47:52.091105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:09.619 [2024-10-07 09:47:52.170572] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:27:09.619 Running I/O for 15 seconds...
00:27:09.619 11205.00 IOPS, 43.77 MiB/s [2024-10-07 09:47:54.526854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.619 [2024-10-07 09:47:54.526888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.619 [2024-10-07 09:47:54.526905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.619 [2024-10-07 09:47:54.526914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.619 [2024-10-07 09:47:54.526924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.619 [2024-10-07 09:47:54.526932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.619 [2024-10-07 09:47:54.526941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.619 [2024-10-07 09:47:54.526949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.619 [2024-10-07 09:47:54.526959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.619 [2024-10-07 09:47:54.526967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.619 [2024-10-07 09:47:54.526976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.619 [2024-10-07 09:47:54.526983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.619 [2024-10-07 09:47:54.526993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.619 [2024-10-07 09:47:54.527001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.619 [2024-10-07 09:47:54.527010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.619 [2024-10-07 09:47:54.527018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.619 [2024-10-07 09:47:54.527027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.619 [2024-10-07 09:47:54.527034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.619 [2024-10-07 09:47:54.527043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.619 [2024-10-07 09:47:54.527051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.619 [2024-10-07 
09:47:54.527061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.619 [2024-10-07 09:47:54.527069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.619 [2024-10-07 09:47:54.527083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.619 [2024-10-07 09:47:54.527091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.619 [2024-10-07 09:47:54.527100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.619 [2024-10-07 09:47:54.527108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.619 [2024-10-07 09:47:54.527117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.619 [2024-10-07 09:47:54.527125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.619 [2024-10-07 09:47:54.527134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.619 [2024-10-07 09:47:54.527141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 09:47:54.527209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 09:47:54.527226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 09:47:54.527243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 09:47:54.527260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 09:47:54.527277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 09:47:54.527295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 09:47:54.527312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:51 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 09:47:54.527466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 09:47:54.527483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 09:47:54.527500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 09:47:54.527519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 09:47:54.527536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 09:47:54.527553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 09:47:54.527570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97320 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.620 [2024-10-07 09:47:54.527756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 
09:47:54.527776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 09:47:54.527794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 09:47:54.527810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 09:47:54.527828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.620 [2024-10-07 09:47:54.527837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.620 [2024-10-07 09:47:54.527845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.527854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.527861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.527871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.527878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.527887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.527894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.527903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.621 [2024-10-07 09:47:54.527911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.527921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.527928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.527938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.527945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.527954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.527963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.527972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.527979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.527989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.527996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.528015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.528032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.528048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.528065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.528082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.528099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.528117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.528135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.528151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.528168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.621 [2024-10-07 09:47:54.528187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.621 [2024-10-07 09:47:54.528203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.621 [2024-10-07 09:47:54.528221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.621 [2024-10-07 09:47:54.528238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.621 [2024-10-07 09:47:54.528255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.621 [2024-10-07 09:47:54.528271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.621 [2024-10-07 09:47:54.528289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.621 [2024-10-07 09:47:54.528306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.621 [2024-10-07 09:47:54.528322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.621 [2024-10-07 09:47:54.528339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.621 [2024-10-07 09:47:54.528356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.621 [2024-10-07 09:47:54.528373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.621 [2024-10-07 09:47:54.528396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.621 [2024-10-07 09:47:54.528413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.528429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.528446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.528464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 
09:47:54.528473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.528480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.528497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.528514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.621 [2024-10-07 09:47:54.528523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.621 [2024-10-07 09:47:54.528531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.622 [2024-10-07 09:47:54.528540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.622 [2024-10-07 09:47:54.528547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.622 [2024-10-07 09:47:54.528556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.622 [2024-10-07 09:47:54.528564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.622 [2024-10-07 09:47:54.528574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.622 [2024-10-07 09:47:54.528581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.622 [2024-10-07 09:47:54.528590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.622 [2024-10-07 09:47:54.528597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.622 [2024-10-07 09:47:54.528609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.622 [2024-10-07 09:47:54.528630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.622 [2024-10-07 09:47:54.528640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.622 [2024-10-07 09:47:54.528647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.622 [2024-10-07 09:47:54.528657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.622 [2024-10-07 09:47:54.528664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for the remaining queued READs (lba 96912-97056) and one WRITE (lba 97520) on qid:1, each aborted with ABORTED - SQ DELETION (00/08) ...]
00:27:09.622 [2024-10-07 09:47:54.529025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:09.622 [2024-10-07 09:47:54.529033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97064 len:8 PRP1 0x0 PRP2 0x0
00:27:09.622 [2024-10-07 09:47:54.529041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... nvme_qpair_abort_queued_reqs (*ERROR*: aborting queued i/o) plus the same manual-completion triplet repeats for queued READs lba 97072-97104 ...]
00:27:09.622 [2024-10-07 09:47:54.529217] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1004810 was disconnected and freed. reset controller.
00:27:09.622 [2024-10-07 09:47:54.529227] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... four queued ASYNC EVENT REQUEST admin commands (qid:0 cid:0-3) aborted with ABORTED - SQ DELETION (00/08) ...]
00:27:09.623 [2024-10-07 09:47:54.529310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:09.623 [2024-10-07 09:47:54.532880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:09.623 [2024-10-07 09:47:54.532907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe3fe0 (9): Bad file descriptor
00:27:09.623 [2024-10-07 09:47:54.572957] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
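The failover sequence above (queued I/O completed with SQ DELETION aborts, the qpair freed, failover from 10.0.0.2:4420 to 10.0.0.2:4421, then a successful controller reset) is the trace you get when the active path's TCP listener is torn down while bdevperf still has I/O queued. A minimal target-side sketch that produces this pattern with the standard scripts/rpc.py helpers; the bdev name, sizes, and removal order are illustrative assumptions, not values taken from this job:

# Sketch only, not this job's script: one subsystem with three TCP listeners
rpc_py=scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp
$rpc_py bdev_malloc_create -b Malloc0 64 512    # assumed backing bdev: 64 MiB, 512 B blocks
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                  # the three paths seen in this log
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done
# Dropping the active listener mid-I/O triggers the disconnect/failover logged above:
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The (00/08) status on every abort decodes as status code type 0x0 (generic) with status code 0x08, Command Aborted due to SQ Deletion, and dnr:0 leaves the do-not-retry bit clear, so bdev_nvme is free to resubmit each I/O on the next path once the reset completes.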
00:27:09.623 11030.50 IOPS, 43.09 MiB/s 11088.67 IOPS, 43.32 MiB/s 11364.50 IOPS, 44.39 MiB/s [2024-10-07 09:47:58.136709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:09.623 [2024-10-07 09:47:58.136740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for the remaining queued WRITEs (lba 50888-51264) and READs (lba 50248-50864) on qid:1, each aborted with ABORTED - SQ DELETION (00/08) ...]
00:27:09.626 [2024-10-07 09:47:58.138364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006780 is same with the state(6) to be set
00:27:09.626 [2024-10-07 09:47:58.138371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:09.626 [2024-10-07 09:47:58.138376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:09.626 [2024-10-07 09:47:58.138381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50872 len:8 PRP1 0x0 PRP2 0x0
00:27:09.626 [2024-10-07 09:47:58.138386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:09.626 [2024-10-07 09:47:58.138415] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1006780 was disconnected and freed. reset controller.
00:27:09.626 [2024-10-07 09:47:58.138423] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[... four queued ASYNC EVENT REQUEST admin commands (qid:0 cid:0-3) aborted with ABORTED - SQ DELETION (00/08) ...]
00:27:09.626 [2024-10-07 09:47:58.138487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:09.626 [2024-10-07 09:47:58.140930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:09.626 [2024-10-07 09:47:58.140951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe3fe0 (9): Bad file descriptor
00:27:09.626 [2024-10-07 09:47:58.217500] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
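The identical pattern one hop later (4421 to 4422) is consistent with the host side having registered all three transport IDs up front, so bdev_nvme_failover_trid simply advances to the next trid whenever the active qpair drops. A host-side sketch under that assumption; the bdevperf RPC socket path and bdev name are illustrative, and -x failover assumes an SPDK build whose bdev_nvme_attach_controller exposes the multipath-mode option:

# Sketch only: register the same controller on three trids with failover policy
rpc_py=scripts/rpc.py
sock=/var/tmp/bdevperf.sock    # assumed bdevperf RPC socket path
$rpc_py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
# Alternate paths for the same -b NVMe0 controller and subsystem NQN:
$rpc_py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
$rpc_py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover

Because every aborted command is retryable (dnr:0), the periodic bdevperf throughput samples that follow keep climbing across the resets instead of the run failing.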
00:27:09.626 11468.00 IOPS, 44.80 MiB/s 11714.17 IOPS, 45.76 MiB/s 11892.86 IOPS, 46.46 MiB/s 12048.25 IOPS, 47.06 MiB/s 12135.00 IOPS, 47.40 MiB/s
00:27:09.626 [2024-10-07 09:48:02.516707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.626 [2024-10-07 09:48:02.516735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:09.626-00:27:09.630 [2024-10-07 09:48:02.516748-09:48:02.531756] [about 127 further print_command/print_completion pairs elided: every remaining outstanding READ (sqid:1, lba 4880-5136) and WRITE (sqid:1, lba 5144-5632) completed with the same ABORTED - SQ DELETION (00/08) status, after which nvme_qpair_abort_queued_reqs/nvme_qpair_manual_complete_request manually completed the still-queued WRITEs (lba 5640-5888)]
00:27:09.630 [2024-10-07 09:48:02.531790] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1013590 was disconnected and freed. reset controller.
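Every abort in the storm above prints an 'ABORTED - SQ DELETION' completion, which makes the volume of aborted I/O easy to quantify after the fact. A rough tally (sketch; try.txt is the per-test log that host/failover.sh cats further down in this output):

  grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt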
00:27:09.630 [2024-10-07 09:48:02.531799] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:27:09.630 [2024-10-07 09:48:02.531822-09:48:02.531865] [four pending admin ASYNC EVENT REQUESTs (qid:0 cid:0-3) completed ABORTED - SQ DELETION (00/08); repeated print_command/print_completion pairs elided]
00:27:09.630 [2024-10-07 09:48:02.531870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:09.630 [2024-10-07 09:48:02.531903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe3fe0 (9): Bad file descriptor
00:27:09.630 [2024-10-07 09:48:02.534638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:09.630 [2024-10-07 09:48:02.561275] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
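After each hop the bdev must still be visible to the initiator, otherwise the test would be reporting a dropped controller rather than a clean failover. The check the script uses (sketch, mirroring the host/failover.sh@82 and @88 trace below):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_get_controllers | grep -q NVMe0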
00:27:09.630 12170.40 IOPS, 47.54 MiB/s 12256.18 IOPS, 47.88 MiB/s 12306.92 IOPS, 48.07 MiB/s 12360.15 IOPS, 48.28 MiB/s 12412.21 IOPS, 48.49 MiB/s 12449.00 IOPS, 48.63 MiB/s
00:27:09.630 Latency(us)
00:27:09.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:09.630 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:09.630 Verification LBA range: start 0x0 length 0x4000
00:27:09.630 NVMe0n1 : 15.01 12449.79 48.63 447.08 0.00 9903.65 373.76 22063.79
00:27:09.630 ===================================================================================================================
00:27:09.630 Total : 12449.79 48.63 447.08 0.00 9903.65 373.76 22063.79
00:27:09.630 Received shutdown signal, test time was about 15.000000 seconds
00:27:09.630
00:27:09.630 Latency(us)
00:27:09.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:09.630 ===================================================================================================================
00:27:09.630 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3477555
09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3477555 /var/tmp/bdevperf.sock
09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # '[' -z 3477555 ']'
09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock
09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local max_retries=100
09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
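host/failover.sh@65-67, traced above, asserts that the 15-second run produced exactly three 'Resetting controller successful' notices, one per failover hop. A minimal equivalent of that check (sketch; the $testlog variable is hypothetical, the real script pipes its own captured log into grep):

  count=$(grep -c 'Resetting controller successful' "$testlog")
  (( count != 3 )) && exit 1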
00:27:09.630 09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@843 -- # xtrace_disable
09:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:27:10.203 09:48:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@863 -- # (( i == 0 ))
09:48:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@867 -- # return 0
09:48:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-10-07 09:48:09.721651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:27:10.203 09:48:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:27:10.465 [2024-10-07 09:48:09.898101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:27:10.465 09:48:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:10.727 NVMe0n1
00:27:10.727 09:48:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:10.988
00:27:10.989 09:48:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:11.250
00:27:11.250 09:48:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
09:48:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:27:11.510 09:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:11.771 09:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:27:15.073 09:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
09:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:27:15.074 09:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
09:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3478888
09:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3478888
00:27:16.016 {
00:27:16.016 "results": [
00:27:16.016 {
00:27:16.016 "job": "NVMe0n1",
00:27:16.016 "core_mask": "0x1",
00:27:16.016 "workload": "verify",
00:27:16.016 "status": "finished", 00:27:16.016 "verify_range": { 00:27:16.016 "start": 0, 00:27:16.016 "length": 16384 00:27:16.016 }, 00:27:16.016 "queue_depth": 128, 00:27:16.016 "io_size": 4096, 00:27:16.016 "runtime": 1.006197, 00:27:16.016 "iops": 13048.140672254041, 00:27:16.016 "mibps": 50.96929950099235, 00:27:16.016 "io_failed": 0, 00:27:16.016 "io_timeout": 0, 00:27:16.016 "avg_latency_us": 9775.253178967681, 00:27:16.016 "min_latency_us": 2129.92, 00:27:16.016 "max_latency_us": 8410.453333333333 00:27:16.016 } 00:27:16.016 ], 00:27:16.016 "core_count": 1 00:27:16.016 } 00:27:16.016 09:48:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:16.016 [2024-10-07 09:48:08.773913] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:27:16.016 [2024-10-07 09:48:08.773973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3477555 ] 00:27:16.016 [2024-10-07 09:48:08.852191] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.016 [2024-10-07 09:48:08.906685] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.016 [2024-10-07 09:48:11.208518] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:16.016 [2024-10-07 09:48:11.208560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.016 [2024-10-07 09:48:11.208569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.016 [2024-10-07 09:48:11.208576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.016 [2024-10-07 09:48:11.208582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.016 [2024-10-07 09:48:11.208588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.016 [2024-10-07 09:48:11.208593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.016 [2024-10-07 09:48:11.208598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.016 [2024-10-07 09:48:11.208604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.016 [2024-10-07 09:48:11.208612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.016 [2024-10-07 09:48:11.208635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.016 [2024-10-07 09:48:11.208646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a1fe0 (9): Bad file descriptor 00:27:16.016 [2024-10-07 09:48:11.229026] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:16.016 Running I/O for 1 seconds... 
00:27:16.016 13001.00 IOPS, 50.79 MiB/s 00:27:16.016 Latency(us) 00:27:16.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:16.016 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:16.016 Verification LBA range: start 0x0 length 0x4000 00:27:16.016 NVMe0n1 : 1.01 13048.14 50.97 0.00 0.00 9775.25 2129.92 8410.45 00:27:16.016 =================================================================================================================== 00:27:16.016 Total : 13048.14 50.97 0.00 0.00 9775.25 2129.92 8410.45 00:27:16.016 09:48:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:16.016 09:48:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:27:16.276 09:48:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:16.276 09:48:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:16.276 09:48:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:27:16.537 09:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:16.796 09:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:27:20.095 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:20.095 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:27:20.095 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3477555 00:27:20.095 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' -z 3477555 ']' 00:27:20.095 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # kill -0 3477555 00:27:20.095 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # uname 00:27:20.095 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:27:20.095 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3477555 00:27:20.095 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:27:20.095 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:27:20.095 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3477555' 00:27:20.095 killing process with pid 3477555 00:27:20.095 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # kill 3477555 00:27:20.095 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@977 -- # wait 3477555 00:27:20.095 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:27:20.095 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:20.355 rmmod nvme_tcp 00:27:20.355 rmmod nvme_fabrics 00:27:20.355 rmmod nvme_keyring 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 3473590 ']' 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 3473590 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' -z 3473590 ']' 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # kill -0 3473590 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # uname 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3473590 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3473590' 00:27:20.355 killing process with pid 3473590 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # kill 3473590 00:27:20.355 09:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@977 -- # wait 3473590 00:27:20.615 09:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:20.615 09:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:20.615 09:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:20.615 09:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:27:20.615 09:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:27:20.615 09:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:20.615 09:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:27:20.615 
09:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:20.615 09:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:20.615 09:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.615 09:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.615 09:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:23.160 00:27:23.160 real 0m40.599s 00:27:23.160 user 2m3.763s 00:27:23.160 sys 0m9.057s 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # xtrace_disable 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:23.160 ************************************ 00:27:23.160 END TEST nvmf_failover 00:27:23.160 ************************************ 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1110 -- # xtrace_disable 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.160 ************************************ 00:27:23.160 START TEST nvmf_host_discovery 00:27:23.160 ************************************ 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:23.160 * Looking for test storage... 
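[Annotation] The xtrace just below is scripts/common.sh deciding whether the installed lcov is older than version 2 (this gates which LCOV_OPTS get exported for coverage runs). A condensed, hedged reconstruction of that comparison, simplified from the traced logic (the real script also sanitizes fields through a `decimal` helper, omitted here):

    # reconstructed from the cmp_versions trace below: split both version
    # strings on ".", "-" and ":" and compare numerically, field by field;
    # returns 0 (true) when the first version is strictly lower
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1
    }
    lt 1.15 2 && echo "lcov is pre-2.0"   # matches the traced result below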
00:27:23.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1626 -- # lcov --version 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:27:23.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.160 --rc genhtml_branch_coverage=1 00:27:23.160 --rc genhtml_function_coverage=1 00:27:23.160 --rc genhtml_legend=1 00:27:23.160 --rc geninfo_all_blocks=1 00:27:23.160 --rc geninfo_unexecuted_blocks=1 00:27:23.160 00:27:23.160 ' 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:27:23.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.160 --rc genhtml_branch_coverage=1 00:27:23.160 --rc genhtml_function_coverage=1 00:27:23.160 --rc genhtml_legend=1 00:27:23.160 --rc geninfo_all_blocks=1 00:27:23.160 --rc geninfo_unexecuted_blocks=1 00:27:23.160 00:27:23.160 ' 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:27:23.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.160 --rc genhtml_branch_coverage=1 00:27:23.160 --rc genhtml_function_coverage=1 00:27:23.160 --rc genhtml_legend=1 00:27:23.160 --rc geninfo_all_blocks=1 00:27:23.160 --rc geninfo_unexecuted_blocks=1 00:27:23.160 00:27:23.160 ' 00:27:23.160 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:27:23.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.160 --rc genhtml_branch_coverage=1 00:27:23.161 --rc genhtml_function_coverage=1 00:27:23.161 --rc genhtml_legend=1 00:27:23.161 --rc geninfo_all_blocks=1 00:27:23.161 --rc geninfo_unexecuted_blocks=1 00:27:23.161 00:27:23.161 ' 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:23.161 09:48:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:23.161 09:48:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:23.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:27:23.161 09:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # 
net_devs=() 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:31.301 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:31.301 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:31.301 Found net devices under 0000:31:00.0: cvl_0_0 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.301 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:31.301 Found net devices under 0000:31:00.1: 
cvl_0_1 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:31.302 09:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:31.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:31.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:27:31.302 00:27:31.302 --- 10.0.0.2 ping statistics --- 00:27:31.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.302 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:31.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:31.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.368 ms 00:27:31.302 00:27:31.302 --- 10.0.0.1 ping statistics --- 00:27:31.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.302 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=3484300 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 3484300 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # '[' -z 3484300 ']' 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local max_retries=100 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
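[Annotation] For reference, the network plumbing that made those two pings work, condensed verbatim from the nvmf_tcp_init trace above (cvl_0_0/cvl_0_1 are the two e810 ports detected earlier; the target port is moved into its own network namespace so a single machine can run both ends of the NVMe/TCP connection over physical NICs):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the NVMe/TCP port on the initiator side, then sanity-check both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1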
00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@843 -- # xtrace_disable 00:27:31.302 09:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.302 [2024-10-07 09:48:30.383147] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:27:31.302 [2024-10-07 09:48:30.383221] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.302 [2024-10-07 09:48:30.475789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.302 [2024-10-07 09:48:30.568046] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.302 [2024-10-07 09:48:30.568106] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.302 [2024-10-07 09:48:30.568115] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.302 [2024-10-07 09:48:30.568123] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.302 [2024-10-07 09:48:30.568135] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:31.302 [2024-10-07 09:48:30.568963] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.564 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:27:31.564 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@867 -- # return 0 00:27:31.564 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:31.564 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@733 -- # xtrace_disable 00:27:31.564 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.825 [2024-10-07 09:48:31.258229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.825 [2024-10-07 09:48:31.270543] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd 
bdev_null_create null0 1000 512 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.825 null0 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.825 null1 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3484464 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3484464 /tmp/host.sock 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # '[' -z 3484464 ']' 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local rpc_addr=/tmp/host.sock 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local max_retries=100 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:31.825 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@843 -- # xtrace_disable 00:27:31.825 09:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.825 [2024-10-07 09:48:31.368516] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
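[Annotation] Target-side state at this point, recapped as a hedged sketch; every call below is verbatim from the rpc_cmd trace above (rpc_cmd is the harness wrapper around scripts/rpc.py, and bdev_null_create's arguments are, to the best of this annotation's knowledge, name, size in MiB, and block size):

    # main target, running inside the namespace with the default RPC socket
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    rpc_cmd bdev_null_create null0 1000 512     # 1000 MiB, 512-byte blocks
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd bdev_wait_for_examine

    # a second nvmf_tgt plays the host role, with its own RPC socket
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!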
00:27:31.825 [2024-10-07 09:48:31.368581] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3484464 ] 00:27:31.825 [2024-10-07 09:48:31.450901] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.086 [2024-10-07 09:48:31.549545] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@867 -- # return 0 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort
00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:32.660 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
[2024-10-07 09:48:32.541832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:27:32.923 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local max=10
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- ))
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_notification_count
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( notification_count == expected_count ))
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # return 0
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local max=10
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- ))
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_subsystem_names
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:27:33.186 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:33.187 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:27:33.187 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:33.187 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # [[ '' == \n\v\m\e\0 ]]
00:27:33.187 09:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # sleep 1
[2024-10-07 09:48:33.257810] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
[2024-10-07 09:48:33.257834] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
[2024-10-07 09:48:33.257848] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
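The autotest_common.sh@917-923 fragments interleaved above are the body of the waitforcondition helper that drives every check in this test. A minimal sketch reconstructed from this xtrace alone (the in-tree helper may differ in details):

waitforcondition() {
	local cond=$1 # condition string, re-eval'd until it holds (@917)
	local max=10  # retry budget (@918)
	while ((max--)); do       # @919
		if eval "$cond"; then # @920
			return 0          # @921
		fi
		sleep 1               # @923
	done
	return 1 # condition never became true within the budget
}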
[2024-10-07 09:48:33.346125] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
[2024-10-07 09:48:33.407407] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
[2024-10-07 09:48:33.407428] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- ))
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_subsystem_names
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # return 0
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local max=10
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- ))
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_bdev_list
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
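The discovery.sh@55/@59/@63 pipelines that recur throughout the trace are the three polling helpers the conditions call. Reconstructed sketch, assuming the /tmp/host.sock RPC socket shown above (@63 appears in the path checks that follow):

get_subsystem_names() { # @59: controller names as seen by the host application
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() { # @55: bdev names attached on the host side
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
get_subsystem_paths() { # @63: listening ports (trsvcid) of one controller's paths
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

The trailing xargs flattens the sorted names onto one space-separated line, which is why comparisons such as [[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]] work below.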
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # return 0
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local max=10
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- ))
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_subsystem_paths nvme0
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # [[ 4420 == \4\4\2\0 ]]
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # return 0
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local max=10
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- ))
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_notification_count
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( notification_count == expected_count ))
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # return 0
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:34.330 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:34.331 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:27:34.331 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:27:34.331 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local max=10
00:27:34.331 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- ))
00:27:34.331 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:27:34.331 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_bdev_list
00:27:34.331 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:34.331 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:34.331 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:34.331 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:34.331 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:34.331 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:34.592 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:34.592 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:27:34.592 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # return 0
00:27:34.592 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:27:34.592 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:27:34.592 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:27:34.592 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:27:34.592 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local max=10
00:27:34.592 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- ))
00:27:34.592 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:27:34.592 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_notification_count
00:27:34.592 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:27:34.592 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:27:34.592 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:34.592 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:34.592 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( notification_count == expected_count ))
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # return 0
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
[2024-10-07 09:48:34.262114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
[2024-10-07 09:48:34.262653] bdev_nvme.c:7146:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
[2024-10-07 09:48:34.262682] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local max=10
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- ))
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_subsystem_names
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # return 0
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local max=10
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- ))
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_bdev_list
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # return 0
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local max=10
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- ))
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_subsystem_paths nvme0
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
[2024-10-07 09:48:34.392329] bdev_nvme.c:7088:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:27:34.853 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # sleep 1
[2024-10-07 09:48:34.492392] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
[2024-10-07 09:48:34.492413] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
[2024-10-07 09:48:34.492419] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:27:35.795 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- ))
00:27:35.795 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:27:35.795 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_subsystem_paths nvme0
00:27:35.795 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:27:35.795 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:27:35.795 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:35.795 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:27:35.795 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:35.795 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:27:35.795 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # return 0
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local max=10
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- ))
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_notification_count
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( notification_count == expected_count ))
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # return 0
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
[2024-10-07 09:48:35.538189] bdev_nvme.c:7146:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
[2024-10-07 09:48:35.538213] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local max=10
00:27:36.058 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- ))
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_subsystem_names
[2024-10-07 09:48:35.547452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-10-07 09:48:35.547472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-07 09:48:35.547482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-10-07 09:48:35.547493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-07 09:48:35.547502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-10-07 09:48:35.547509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-07 09:48:35.547517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-10-07 09:48:35.547525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-07 09:48:35.547532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207ffd0 is same with the state(6) to be set
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
[2024-10-07 09:48:35.557466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207ffd0 (9): Bad file descriptor
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
[2024-10-07 09:48:35.567507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-10-07 09:48:35.567842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-10-07 09:48:35.567859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207ffd0 with addr=10.0.0.2, port=4420
[2024-10-07 09:48:35.567867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207ffd0 is same with the state(6) to be set
[2024-10-07 09:48:35.567879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207ffd0 (9): Bad file descriptor
[2024-10-07 09:48:35.567890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-10-07 09:48:35.567897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-10-07 09:48:35.567905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-10-07 09:48:35.567917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-10-07 09:48:35.577564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-10-07 09:48:35.577919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-10-07 09:48:35.577933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207ffd0 with addr=10.0.0.2, port=4420
[2024-10-07 09:48:35.577940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207ffd0 is same with the state(6) to be set
[2024-10-07 09:48:35.577951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207ffd0 (9): Bad file descriptor
[2024-10-07 09:48:35.577962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-10-07 09:48:35.577968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-10-07 09:48:35.577976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-10-07 09:48:35.577986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-10-07 09:48:35.587622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-10-07 09:48:35.587988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-10-07 09:48:35.588002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207ffd0 with addr=10.0.0.2, port=4420
[2024-10-07 09:48:35.588010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207ffd0 is same with the state(6) to be set
[2024-10-07 09:48:35.588021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207ffd0 (9): Bad file descriptor
[2024-10-07 09:48:35.588031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-10-07 09:48:35.588038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-10-07 09:48:35.588046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-10-07 09:48:35.588057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
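Every rpc_cmd invocation above is bracketed by an autotest_common.sh@564 xtrace_disable and a @592 status check ([[ 0 == 0 ]] on success; [[ 1 == 0 ]] when an RPC fails, as in the "File exists" cases further down). A rough sketch of that behavior only; the in-tree wrapper additionally keeps a persistent rpc.py session, so treat this as an approximation, not the real implementation:

rpc_cmd() {
	xtrace_disable                          # @564: quiet the trace inside the wrapper
	local rc=0
	"$rootdir/scripts/rpc.py" "$@" || rc=$? # forward args to the JSON-RPC client
	xtrace_restore
	[[ $rc == 0 ]]                          # @592-style check: propagate the RPC's status
}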
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # return 0
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local max=10
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- ))
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_bdev_list
[2024-10-07 09:48:35.597678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
[2024-10-07 09:48:35.598901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-10-07 09:48:35.598926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207ffd0 with addr=10.0.0.2, port=4420
[2024-10-07 09:48:35.598936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207ffd0 is same with the state(6) to be set
[2024-10-07 09:48:35.598954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207ffd0 (9): Bad file descriptor
[2024-10-07 09:48:35.598987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-10-07 09:48:35.598997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-10-07 09:48:35.599008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-10-07 09:48:35.599023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
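The notification checks (discovery.sh@74-80 throughout this trace) count how many notify events the target emitted since the last seen id. A sketch reconstructed from the @74/@75/@79/@80 lines; notification_count and notify_id behave as globals here, and the real discovery.sh may differ in detail:

get_notification_count() {
	# @74: number of events newer than the current high-water mark
	notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
	# @75: advance the mark (matches the notify_id=0/1/2/4 progression in this log)
	notify_id=$((notify_id + notification_count))
}
is_notification_count_eq() {
	local expected_count=$1 # @79
	waitforcondition 'get_notification_count && ((notification_count == expected_count))' # @80
}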
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:36.059 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
[2024-10-07 09:48:35.607728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-10-07 09:48:35.608048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-10-07 09:48:35.608058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207ffd0 with addr=10.0.0.2, port=4420
[2024-10-07 09:48:35.608064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207ffd0 is same with the state(6) to be set
[2024-10-07 09:48:35.608072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207ffd0 (9): Bad file descriptor
[2024-10-07 09:48:35.608080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-10-07 09:48:35.608084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-10-07 09:48:35.608090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-10-07 09:48:35.608098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-10-07 09:48:35.617777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-10-07 09:48:35.618122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-10-07 09:48:35.618131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207ffd0 with addr=10.0.0.2, port=4420
[2024-10-07 09:48:35.618137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207ffd0 is same with the state(6) to be set
[2024-10-07 09:48:35.618144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207ffd0 (9): Bad file descriptor
[2024-10-07 09:48:35.618152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-10-07 09:48:35.618156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-10-07 09:48:35.618161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-10-07 09:48:35.618169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.059 [2024-10-07 09:48:35.626623] bdev_nvme.c:6951:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:36.059 [2024-10-07 09:48:35.626637] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # return 0 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local max=10 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- )) 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_subsystem_paths nvme0 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # [[ 4421 == \4\4\2\1 ]] 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # return 0 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local max=10 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- )) 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_notification_count 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.060 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( notification_count == expected_count )) 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # return 0 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local max=10 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- )) 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_subsystem_names 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # [[ '' == '' ]] 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # return 0 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local max=10 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- )) 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_bdev_list 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # [[ '' == '' ]] 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # return 0 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local max=10 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( max-- )) 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # get_notification_count 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:36.320 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:36.321 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:36.321 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.321 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:36.321 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:36.321 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:36.321 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( notification_count == expected_count )) 00:27:36.321 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # return 0 00:27:36.321 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:36.321 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:36.321 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.710 [2024-10-07 09:48:36.957790] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:37.710 [2024-10-07 09:48:36.957804] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:37.710 [2024-10-07 09:48:36.957813] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:37.710 [2024-10-07 09:48:37.046054] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:37.710 [2024-10-07 09:48:37.359328] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:37.710 [2024-10-07 09:48:37.359352] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:37.710 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:37.710 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:37.710 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # local es=0 00:27:37.710 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:37.710 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:27:37.710 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:27:37.710 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:27:37.710 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:27:37.710 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@656 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:37.710 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:37.710 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.972 request: 00:27:37.972 { 00:27:37.972 "name": "nvme", 00:27:37.972 "trtype": "tcp", 00:27:37.972 "traddr": "10.0.0.2", 00:27:37.972 "adrfam": "ipv4", 00:27:37.972 "trsvcid": "8009", 00:27:37.972 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:37.972 "wait_for_attach": true, 00:27:37.972 "method": "bdev_nvme_start_discovery", 00:27:37.972 "req_id": 1 00:27:37.972 } 00:27:37.972 Got JSON-RPC error response 00:27:37.972 response: 00:27:37.972 { 00:27:37.972 "code": -17, 00:27:37.972 "message": "File exists" 00:27:37.972 } 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@656 -- # es=1 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # local es=0 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@656 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.972 request: 00:27:37.972 { 00:27:37.972 "name": "nvme_second", 00:27:37.972 "trtype": "tcp", 00:27:37.972 "traddr": "10.0.0.2", 00:27:37.972 "adrfam": "ipv4", 00:27:37.972 "trsvcid": "8009", 00:27:37.972 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:37.972 "wait_for_attach": true, 00:27:37.972 "method": "bdev_nvme_start_discovery", 00:27:37.972 "req_id": 1 00:27:37.972 } 00:27:37.972 Got JSON-RPC error response 00:27:37.972 response: 00:27:37.972 { 00:27:37.972 "code": -17, 00:27:37.972 "message": "File exists" 00:27:37.972 } 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@656 -- # es=1 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 
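
The two -17 "File exists" responses above are the expected outcome here: bdev_nvme_start_discovery refuses to register a second discovery service under a controller name ("nvme") that is already attached, and likewise refuses a new name ("nvme_second") against a discovery endpoint (10.0.0.2:8009) that is already being polled. A minimal sketch of the same duplicate-registration check against a standalone host socket, using only the parameters visible in the trace:

    # first registration attaches the discovery ctrlr and waits for it (-w)
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # re-running it (same -b name, or any name against the same endpoint)
    # returns JSON-RPC error -17 "File exists", as in the log above
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
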
00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # local es=0 00:27:37.972 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:37.973 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:27:37.973 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:27:37.973 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:27:37.973 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:27:37.973 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@656 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:37.973 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:37.973 09:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.359 [2024-10-07 09:48:38.614596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.359 [2024-10-07 09:48:38.614623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2099ca0 with addr=10.0.0.2, port=8010 00:27:39.359 [2024-10-07 09:48:38.614637] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:39.359 [2024-10-07 09:48:38.614642] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:39.359 [2024-10-07 09:48:38.614647] bdev_nvme.c:7226:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:40.300 [2024-10-07 09:48:39.617067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.300 [2024-10-07 09:48:39.617086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2099ca0 with addr=10.0.0.2, port=8010 00:27:40.300 [2024-10-07 09:48:39.617094] 
nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:40.300 [2024-10-07 09:48:39.617099] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:40.300 [2024-10-07 09:48:39.617104] bdev_nvme.c:7226:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:41.242 [2024-10-07 09:48:40.619098] bdev_nvme.c:7207:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:41.242 request: 00:27:41.242 { 00:27:41.242 "name": "nvme_second", 00:27:41.242 "trtype": "tcp", 00:27:41.242 "traddr": "10.0.0.2", 00:27:41.242 "adrfam": "ipv4", 00:27:41.242 "trsvcid": "8010", 00:27:41.242 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:41.242 "wait_for_attach": false, 00:27:41.242 "attach_timeout_ms": 3000, 00:27:41.242 "method": "bdev_nvme_start_discovery", 00:27:41.242 "req_id": 1 00:27:41.242 } 00:27:41.242 Got JSON-RPC error response 00:27:41.242 response: 00:27:41.242 { 00:27:41.242 "code": -110, 00:27:41.242 "message": "Connection timed out" 00:27:41.242 } 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@656 -- # es=1 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3484464 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe 
-v -r nvme-tcp 00:27:41.242 rmmod nvme_tcp 00:27:41.242 rmmod nvme_fabrics 00:27:41.242 rmmod nvme_keyring 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 3484300 ']' 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 3484300 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' -z 3484300 ']' 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # kill -0 3484300 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # uname 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3484300 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3484300' 00:27:41.242 killing process with pid 3484300 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # kill 3484300 00:27:41.242 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@977 -- # wait 3484300 00:27:41.502 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:41.502 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:41.502 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:41.502 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:27:41.502 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:27:41.502 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:41.502 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:27:41.502 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:41.502 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:41.503 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.503 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.503 09:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.414 09:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:43.414 00:27:43.414 real 0m20.722s 00:27:43.414 user 0m23.782s 00:27:43.414 sys 0m7.508s 00:27:43.414 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # xtrace_disable 
00:27:43.414 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:43.414 ************************************ 00:27:43.414 END TEST nvmf_host_discovery 00:27:43.414 ************************************ 00:27:43.414 09:48:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:43.414 09:48:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:27:43.414 09:48:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1110 -- # xtrace_disable 00:27:43.415 09:48:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.676 ************************************ 00:27:43.676 START TEST nvmf_host_multipath_status 00:27:43.676 ************************************ 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:43.676 * Looking for test storage... 00:27:43.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1626 -- # lcov --version 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:27:43.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.676 --rc genhtml_branch_coverage=1 00:27:43.676 --rc genhtml_function_coverage=1 00:27:43.676 --rc genhtml_legend=1 00:27:43.676 --rc geninfo_all_blocks=1 00:27:43.676 --rc geninfo_unexecuted_blocks=1 00:27:43.676 00:27:43.676 ' 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:27:43.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.676 --rc genhtml_branch_coverage=1 00:27:43.676 --rc genhtml_function_coverage=1 00:27:43.676 --rc genhtml_legend=1 00:27:43.676 --rc geninfo_all_blocks=1 00:27:43.676 --rc geninfo_unexecuted_blocks=1 00:27:43.676 00:27:43.676 ' 00:27:43.676 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:27:43.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.676 --rc genhtml_branch_coverage=1 00:27:43.676 --rc genhtml_function_coverage=1 00:27:43.676 --rc genhtml_legend=1 00:27:43.676 --rc geninfo_all_blocks=1 00:27:43.676 --rc geninfo_unexecuted_blocks=1 00:27:43.676 00:27:43.676 ' 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:27:43.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.677 --rc genhtml_branch_coverage=1 00:27:43.677 --rc genhtml_function_coverage=1 00:27:43.677 --rc genhtml_legend=1 00:27:43.677 --rc geninfo_all_blocks=1 00:27:43.677 --rc geninfo_unexecuted_blocks=1 00:27:43.677 00:27:43.677 ' 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.677 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:43.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:43.938 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:43.939 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:43.939 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.939 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:43.939 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.939 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:43.939 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:43.939 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:27:43.939 09:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.076 09:48:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:52.076 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:52.076 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:52.076 09:48:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:52.076 Found net devices under 0000:31:00.0: cvl_0_0 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.076 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:52.077 Found net devices under 0000:31:00.1: cvl_0_1 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 
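
The device probe above never matches on interface names directly: each detected e810 PCI function is resolved to its kernel net device by globbing sysfs (pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)), which is how the trace arrives at cvl_0_0 and cvl_0_1. The same lookup can be reproduced by hand with the PCI addresses reported above (output names as shown in the log):

    # net device(s) bound to each e810 port
    ls /sys/bus/pci/devices/0000:31:00.0/net    # cvl_0_0
    ls /sys/bus/pci/devices/0000:31:00.1/net    # cvl_0_1
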
00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:52.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:27:52.077 00:27:52.077 --- 10.0.0.2 ping statistics --- 00:27:52.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.077 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:52.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:27:52.077 00:27:52.077 --- 10.0.0.1 ping statistics --- 00:27:52.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.077 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=3490754 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 3490754 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # '[' -z 3490754 ']' 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local max_retries=100 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@843 -- # xtrace_disable 00:27:52.077 09:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:52.077 [2024-10-07 09:48:51.002603] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:27:52.077 [2024-10-07 09:48:51.002687] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.077 [2024-10-07 09:48:51.095069] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:52.077 [2024-10-07 09:48:51.189204] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.077 [2024-10-07 09:48:51.189264] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.077 [2024-10-07 09:48:51.189273] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.077 [2024-10-07 09:48:51.189280] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.077 [2024-10-07 09:48:51.189288] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:52.077 [2024-10-07 09:48:51.190426] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.077 [2024-10-07 09:48:51.190427] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.337 09:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:27:52.337 09:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@867 -- # return 0 00:27:52.337 09:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:52.337 09:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@733 -- # xtrace_disable 00:27:52.337 09:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:52.337 09:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.337 09:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3490754 00:27:52.337 09:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:52.598 [2024-10-07 09:48:52.021417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.598 09:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:52.598 Malloc0 00:27:52.858 09:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:52.858 09:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:53.120 09:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:53.381 [2024-10-07 09:48:52.847779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:27:53.381 09:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:53.641 [2024-10-07 09:48:53.048326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:53.641 09:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3491251 00:27:53.641 09:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:53.641 09:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:53.641 09:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3491251 /var/tmp/bdevperf.sock 00:27:53.641 09:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # '[' -z 3491251 ']' 00:27:53.641 09:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:53.641 09:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local max_retries=100 00:27:53.641 09:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:53.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:53.641 09:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@843 -- # xtrace_disable 00:27:53.641 09:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:54.638 09:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:27:54.638 09:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@867 -- # return 0 00:27:54.638 09:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:54.638 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:54.974 Nvme0n1 00:27:54.974 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:55.282 Nvme0n1 00:27:55.282 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:55.282 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:57.834 09:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:57.834 09:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:57.834 09:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:57.834 09:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:58.777 09:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:58.777 09:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:58.777 09:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.777 09:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:59.038 09:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.038 09:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:59.038 09:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.038 09:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:59.038 09:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:59.038 09:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:59.038 09:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.038 09:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:59.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:59.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:59.559 09:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.559 
09:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:59.559 09:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.559 09:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:59.559 09:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.559 09:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:59.559 09:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.559 09:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:59.819 09:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.819 09:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:59.819 09:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:00.079 09:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:00.340 09:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:28:01.283 09:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:28:01.283 09:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:01.283 09:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.283 09:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:01.283 09:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:01.283 09:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:01.283 09:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.283 09:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:01.544 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.544 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:01.544 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.544 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:01.804 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.804 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:01.804 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.804 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:02.065 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.065 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:02.065 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.065 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:02.065 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.065 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:02.065 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.065 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:02.326 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.326 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:28:02.326 09:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:02.587 09:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:02.587 09:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 
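The set_ANA_state/check_status pattern traced above and below reduces to three small helpers in multipath_status.sh. A minimal reconstruction from the xtrace alone; the rpc.py invocations and the jq filter are verbatim from the log, while the function bodies themselves are an assumption:

    # set_ANA_state <state-for-4420> <state-for-4421>: flip the ANA state of
    # each target listener (optimized | non_optimized | inaccessible).
    set_ANA_state() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n $1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n $2
    }

    # port_status <trsvcid> <field> <expected>: ask bdevperf for its I/O paths
    # and assert one field (current | connected | accessible) of one path.
    port_status() {
        local status
        status=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]
    }

    # check_status asserts, in the order the trace shows: 4420 current,
    # 4421 current, 4420 connected, 4421 connected, 4420 accessible,
    # 4421 accessible.
    check_status() {
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
        port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

Both paths belong to the single multipath bdev built earlier (two bdev_nvme_attach_controller calls against ports 4420 and 4421, the second with -x multipath, each answering Nvme0n1), so "check_status true false true true true true" reads: 4420 is the active path, both paths are connected, and both are ANA-accessible.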
00:28:03.972 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:28:03.972 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:03.972 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.972 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:03.972 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.972 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:03.972 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.972 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:03.972 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:03.972 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:03.972 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.972 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:04.234 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.234 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:04.234 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.234 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:04.495 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.495 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:04.495 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.495 09:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:04.495 09:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.495 09:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:04.495 09:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:04.495 09:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.755 09:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.755 09:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:28:04.755 09:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:05.015 09:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:05.015 09:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:06.399 09:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:06.399 09:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:06.399 09:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:06.399 09:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:06.399 09:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:06.399 09:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:06.399 09:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:06.399 09:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:06.399 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:06.399 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:06.399 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:06.399 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:06.664 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:28:06.664 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:06.664 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:06.664 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:06.925 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:06.925 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:06.925 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:06.925 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:07.187 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.187 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:07.187 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.187 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:07.187 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:07.187 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:07.187 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:07.448 09:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:07.710 09:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:08.655 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:08.655 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:08.655 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.655 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:08.915 09:49:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:08.915 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:08.915 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.915 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:08.915 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:08.915 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:08.915 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.915 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:09.175 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:09.175 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:09.176 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.176 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:09.436 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:09.436 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:09.436 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.436 09:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:09.436 09:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:09.436 09:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:09.436 09:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.436 09:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:09.695 09:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:09.695 09:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 
-- # set_ANA_state inaccessible optimized 00:28:09.696 09:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:09.955 09:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:09.955 09:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:11.340 09:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:11.340 09:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:11.340 09:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:11.340 09:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:11.340 09:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:11.340 09:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:11.340 09:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:11.340 09:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:11.340 09:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:11.340 09:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:11.340 09:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:11.340 09:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:11.601 09:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:11.601 09:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:11.601 09:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:11.601 09:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:11.863 09:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:11.863 09:49:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:11.863 09:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:11.863 09:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:12.124 09:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:12.124 09:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:12.124 09:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.124 09:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:12.124 09:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:12.124 09:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:12.385 09:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:28:12.385 09:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:12.645 09:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:12.645 09:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:14.030 09:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:14.030 09:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:14.030 09:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.030 09:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:14.030 09:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.030 09:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:14.030 09:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.030 
09:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:14.030 09:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.030 09:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:14.030 09:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.030 09:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:14.290 09:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.290 09:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:14.291 09:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:14.291 09:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.551 09:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.551 09:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:14.551 09:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:14.551 09:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.551 09:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.551 09:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:14.551 09:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.551 09:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:14.811 09:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.811 09:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:14.811 09:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:15.072 09:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:15.333 09:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:16.276 09:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:16.276 09:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:16.276 09:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.276 09:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:16.276 09:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:16.276 09:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:16.277 09:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.277 09:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:16.537 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.537 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:16.537 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.537 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:16.798 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.798 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:16.798 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.798 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:17.059 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:17.059 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:17.059 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:17.059 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:17.059 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:17.059 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:17.059 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:17.059 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:17.320 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:17.320 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:28:17.320 09:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:17.582 09:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:17.582 09:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:28:18.966 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:18.966 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:18.966 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.966 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:18.966 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:18.966 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:18.966 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.966 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:18.966 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:18.966 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:18.966 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.966 09:49:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:19.226 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.227 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:19.227 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:19.227 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:19.488 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.488 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:19.488 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:19.488 09:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:19.488 09:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.488 09:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:19.488 09:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:19.488 09:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:19.750 09:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.750 09:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:19.750 09:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:20.012 09:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:20.273 09:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:21.216 09:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:21.216 09:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:21.216 09:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.216 09:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:21.216 09:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.216 09:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:21.216 09:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.216 09:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:21.478 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:21.478 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:21.478 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.478 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:21.740 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.740 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:21.740 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.740 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:22.001 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.001 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:22.001 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:22.001 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:22.001 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.001 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:22.001 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:22.001 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:28:22.262 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:28:22.262 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3491251
00:28:22.262 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' -z 3491251 ']'
00:28:22.262 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # kill -0 3491251
00:28:22.262 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # uname
00:28:22.262 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:28:22.262 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3491251
00:28:22.262 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # process_name=reactor_2
00:28:22.262 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@963 -- # '[' reactor_2 = sudo ']'
00:28:22.262 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3491251'
00:28:22.262 killing process with pid 3491251
00:28:22.262 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # kill 3491251
00:28:22.262 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@977 -- # wait 3491251
00:28:22.262 {
00:28:22.262   "results": [
00:28:22.262     {
00:28:22.262       "job": "Nvme0n1",
00:28:22.262       "core_mask": "0x4",
00:28:22.262       "workload": "verify",
00:28:22.262       "status": "terminated",
00:28:22.262       "verify_range": {
00:28:22.262         "start": 0,
00:28:22.262         "length": 16384
00:28:22.262       },
00:28:22.262       "queue_depth": 128,
00:28:22.262       "io_size": 4096,
00:28:22.262       "runtime": 26.804123,
00:28:22.262       "iops": 11942.97608617898,
00:28:22.262       "mibps": 46.65225033663664,
00:28:22.262       "io_failed": 0,
00:28:22.262       "io_timeout": 0,
00:28:22.262       "avg_latency_us": 10696.920858800268,
00:28:22.262       "min_latency_us": 798.72,
00:28:22.262       "max_latency_us": 3019898.88
00:28:22.262     }
00:28:22.262   ],
00:28:22.262   "core_count": 1
00:28:22.262 }
00:28:22.526 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3491251
00:28:22.526 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:22.526 [2024-10-07 09:48:53.137264] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization...
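The JSON block just above is bdevperf's terminal summary for the verify job: roughly 11.9k IOPS at about 46.7 MiB/s over about 26.8 s, with zero failed and zero timed-out I/Os, so the ANA flapping never surfaced as an application-visible error. A small sketch for pulling the headline numbers out of a saved copy of that summary (the results.json file name is hypothetical):

    # Print "job: IOPS, failed count, runtime" for each job in the summary.
    jq -r '.results[] | "\(.job): \(.iops | floor) IOPS, \(.io_failed) failed, \(.runtime)s"' results.json

Everything from "Starting SPDK v25.01-pre ..." onward is the cat of try.txt, the trace collected while the test ran. The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions it contains are the status a target is expected to return while a listener's ANA state is inaccessible; with io_failed at 0, the bdev layer evidently recovered each of them on the remaining path rather than failing the I/O.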
00:28:22.526 [2024-10-07 09:48:53.137341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491251 ] 00:28:22.526 [2024-10-07 09:48:53.221872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.526 [2024-10-07 09:48:53.312983] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:22.526 [2024-10-07 09:48:54.791752] bdev_nvme.c:5607:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:28:22.526 Running I/O for 90 seconds... 00:28:22.526 10400.00 IOPS, 40.62 MiB/s 10812.00 IOPS, 42.23 MiB/s 10944.00 IOPS, 42.75 MiB/s 11218.75 IOPS, 43.82 MiB/s 11561.00 IOPS, 45.16 MiB/s 11817.83 IOPS, 46.16 MiB/s 11968.86 IOPS, 46.75 MiB/s 12089.75 IOPS, 47.23 MiB/s 12209.67 IOPS, 47.69 MiB/s 12292.70 IOPS, 48.02 MiB/s 12338.45 IOPS, 48.20 MiB/s [2024-10-07 09:49:06.920085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.526 [2024-10-07 09:49:06.920119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:22.526 [2024-10-07 09:49:06.920152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.526 [2024-10-07 09:49:06.920158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:22.526 [2024-10-07 09:49:06.920169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.526 [2024-10-07 09:49:06.920175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:22.526 [2024-10-07 09:49:06.920185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.526 [2024-10-07 09:49:06.920191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:22.526 [2024-10-07 09:49:06.920201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.526 [2024-10-07 09:49:06.920206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:22.526 [2024-10-07 09:49:06.920217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.526 [2024-10-07 09:49:06.920222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:22.526 [2024-10-07 09:49:06.920232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.526 [2024-10-07 09:49:06.920238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 
cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:28:22.526 [2024-10-07 09:49:06.920248 - 09:49:06.923395] nvme_qpair.c: *NOTICE*: [repetitive I/O trace condensed: ~120 command/completion pairs] WRITE sqid:1 nsid:1 lba:1824-2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:1376-1760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, every command completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:28:22.530 12302.33 IOPS, 48.06 MiB/s
00:28:22.530 11356.00 IOPS, 44.36 MiB/s
00:28:22.530 10544.86 IOPS, 41.19 MiB/s
00:28:22.530 9920.53 IOPS, 38.75 MiB/s
00:28:22.530 10118.25 IOPS, 39.52 MiB/s
00:28:22.530 10287.41 IOPS, 40.19 MiB/s
00:28:22.530 10627.11 IOPS, 41.51 MiB/s
00:28:22.530 10956.32 IOPS, 42.80 MiB/s
00:28:22.530 11159.65 IOPS, 43.59 MiB/s
00:28:22.530 11247.95 IOPS, 43.94 MiB/s
00:28:22.530 11317.00 IOPS, 44.21 MiB/s
00:28:22.530 11516.70 IOPS, 44.99 MiB/s
00:28:22.530 11740.04 IOPS, 45.86 MiB/s
00:28:22.530 [2024-10-07 09:49:19.654911 - 09:49:19.656914] nvme_qpair.c: *NOTICE*: [repetitive I/O trace condensed: a further ~50 command/completion pairs] WRITE sqid:1 nsid:1 lba:106432-106928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:106128-106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, every command completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
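The parenthesised pair in each completion above is the NVMe status printed as SCT/SC: 03/02 is Status Code Type 0x3 (Path Related Status) with Status Code 0x2, ANA state Inaccessible, which is what bdevperf should see while the test holds the active path in the inaccessible ANA state. A quick tally of completion statuses from a saved copy of this trace (a sketch; "bdevperf.log" is a hypothetical capture, not a file this job writes):

    # Count completions per status string and (SCT/SC) pair in a captured log.
    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE ([0-9a-f]*/[0-9a-f]*)' bdevperf.log | sort | uniq -c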
00:28:22.531 11886.24 IOPS, 46.43 MiB/s
00:28:22.531 11917.92 IOPS, 46.55 MiB/s
00:28:22.531 Received shutdown signal, test time was about 26.804767 seconds
00:28:22.531
00:28:22.531                                                               Latency(us)
00:28:22.531 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:22.531 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:22.531 Verification LBA range: start 0x0 length 0x4000
00:28:22.531 Nvme0n1                     :      26.80   11942.98      46.65       0.00       0.00   10696.92     798.72 3019898.88
00:28:22.531 ===================================================================================================================
00:28:22.531 Total                       :              11942.98      46.65       0.00       0.00   10696.92     798.72 3019898.88
00:28:22.531 [2024-10-07 09:49:21.851666] app.c:1033:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times
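The MiB/s column in the table follows directly from the IOPS column for this 4096-byte workload: bytes per second = IOPS x 4096, divided by 1 MiB (1,048,576 bytes). Checking the Total row with plain arithmetic (not part of the log output):

    # 11942.98 IOPS * 4096 B per I/O / 1 MiB -> prints 46.65, matching the table
    awk 'BEGIN { printf "%.2f\n", 11942.98 * 4096 / 1048576 }'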
00:28:22.531 09:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:22.531 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:28:22.531 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:22.531 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:28:22.531 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup
00:28:22.531 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:28:22.531 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:22.531 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:28:22.531 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:22.531 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:22.531 rmmod nvme_tcp
00:28:22.792 rmmod nvme_fabrics
00:28:22.792 rmmod nvme_keyring
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 3490754 ']'
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 3490754
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' -z 3490754 ']'
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # kill -0 3490754
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # uname
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3490754
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # process_name=reactor_0
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']'
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3490754'
00:28:22.792 killing process with pid 3490754
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # kill 3490754
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@977 -- # wait 3490754
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore
00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
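The @953-@977 lines above are the body of the autotest killprocess helper, and the three @789 lines are the iptr helper stripping SPDK's test firewall rules. Reconstructed from this xtrace as a sketch (the real helpers in common/autotest_common.sh and nvmf/common.sh may differ in detail):

    # killprocess: refuse an empty pid, confirm the process is alive, look up
    # its command name on Linux, never kill sudo itself, then kill and reap.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                             # @953
        kill -0 "$pid" || return 1                            # @957
        if [ "$(uname)" = Linux ]; then                       # @958
            process_name=$(ps --no-headers -o comm= "$pid")   # @959
        fi
        [ "$process_name" = sudo ] && return 1                # @963
        echo "killing process with pid $pid"                  # @971
        kill "$pid"                                           # @972
        wait "$pid"                                           # @977
    }

    # iptr: reload the iptables ruleset with every SPDK_NVMF rule dropped.
    # Needs root, as in the CI environment above.
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }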
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.792 09:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:25.340 00:28:25.340 real 0m41.424s 00:28:25.340 user 1m46.881s 00:28:25.340 sys 0m11.558s 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # xtrace_disable 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:25.340 ************************************ 00:28:25.340 END TEST nvmf_host_multipath_status 00:28:25.340 ************************************ 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1110 -- # xtrace_disable 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.340 ************************************ 00:28:25.340 START TEST nvmf_discovery_remove_ifc 00:28:25.340 ************************************ 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:25.340 * Looking for test storage... 
00:28:25.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1626 -- # lcov --version 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:28:25.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.340 --rc genhtml_branch_coverage=1 00:28:25.340 --rc genhtml_function_coverage=1 00:28:25.340 --rc genhtml_legend=1 00:28:25.340 --rc geninfo_all_blocks=1 00:28:25.340 --rc geninfo_unexecuted_blocks=1 00:28:25.340 00:28:25.340 ' 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:28:25.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.340 --rc genhtml_branch_coverage=1 00:28:25.340 --rc genhtml_function_coverage=1 00:28:25.340 --rc genhtml_legend=1 00:28:25.340 --rc geninfo_all_blocks=1 00:28:25.340 --rc geninfo_unexecuted_blocks=1 00:28:25.340 00:28:25.340 ' 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:28:25.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.340 --rc genhtml_branch_coverage=1 00:28:25.340 --rc genhtml_function_coverage=1 00:28:25.340 --rc genhtml_legend=1 00:28:25.340 --rc geninfo_all_blocks=1 00:28:25.340 --rc geninfo_unexecuted_blocks=1 00:28:25.340 00:28:25.340 ' 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:28:25.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.340 --rc genhtml_branch_coverage=1 00:28:25.340 --rc genhtml_function_coverage=1 00:28:25.340 --rc genhtml_legend=1 00:28:25.340 --rc geninfo_all_blocks=1 00:28:25.340 --rc geninfo_unexecuted_blocks=1 00:28:25.340 00:28:25.340 ' 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.340 
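The cmp_versions trace just above is scripts/common.sh deciding whether the installed lcov (1.15 here) predates 2.x before picking the legacy --rc lcov_* coverage option names. A minimal standalone sketch of that comparison, reconstructed from the xtrace (function names and structure follow the trace; this is not a verbatim copy of scripts/common.sh):

    lt() { cmp_versions "$1" '<' "$2"; }    # lt 1.15 2 -> exit 0 (true)

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        local op=$2
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}

        local lt=0 gt=0 v
        # walk the longer component list, padding the shorter one with 0
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { gt=1; break; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { lt=1; break; }
        done
        case "$op" in
            '<') (( lt == 1 )) ;;
            '>') (( gt == 1 )) ;;
        esac
    }

    lt 1.15 2 && echo 'lcov < 2: use --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

With "1.15" vs "2" the first component comparison (1 < 2) already decides the result, which is why the trace sets lcov_rc_opt to the pre-2.0 flag spellings.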
09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.340 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:25.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:25.341 09:49:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:28:25.341 09:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:33.482 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:33.482 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 
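The device loop above is nvmf/common.sh matching supported NIC PCI IDs (both ports of an Intel E810, device 0x159b, in this run) and then resolving each PCI function to its kernel net device through sysfs. A minimal sketch of that lookup, using one PCI address taken from the log; the explicit operstate check is an assumption standing in for the harness's "up == up" test:

    pci=0000:31:00.0
    # every netdev backed by this PCI function appears as a directory here
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")    # strip the sysfs path, keep names

    net_devs=()
    for dev in "${pci_net_devs[@]}"; do
        # keep only interfaces that are actually up
        [[ $(< "/sys/class/net/$dev/operstate") == up ]] && net_devs+=("$dev")
    done
    echo "Found net devices under $pci: ${net_devs[*]}"

On this machine the two E810 ports resolve to cvl_0_0 and cvl_0_1, as the "Found net devices under ..." lines below confirm.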
00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:33.482 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:33.483 Found net devices under 0000:31:00.0: cvl_0_0 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:33.483 Found net devices under 0000:31:00.1: cvl_0_1 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:33.483 09:49:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:33.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:28:33.483 00:28:33.483 --- 10.0.0.2 ping statistics --- 00:28:33.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.483 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:33.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:28:33.483 00:28:33.483 --- 10.0.0.1 ping statistics --- 00:28:33.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.483 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=3501225 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 3501225 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # '[' -z 3501225 ']' 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local max_retries=100 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@843 -- # xtrace_disable 00:28:33.483 09:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.483 [2024-10-07 09:49:32.538718] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:28:33.483 [2024-10-07 09:49:32.538784] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.483 [2024-10-07 09:49:32.626892] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.483 [2024-10-07 09:49:32.718675] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.483 [2024-10-07 09:49:32.718738] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.484 [2024-10-07 09:49:32.718747] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.484 [2024-10-07 09:49:32.718754] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.484 [2024-10-07 09:49:32.718760] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:33.484 [2024-10-07 09:49:32.719558] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.745 09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:28:33.745 09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@867 -- # return 0 00:28:33.745 09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:33.745 09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@733 -- # xtrace_disable 00:28:33.745 09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.745 09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.745 09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:33.745 09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:33.745 09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:34.007 [2024-10-07 09:49:33.412334] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.007 [2024-10-07 09:49:33.420613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:34.007 null0 00:28:34.007 [2024-10-07 09:49:33.452553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.007 09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:34.007 09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3501518 00:28:34.007 09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3501518 /tmp/host.sock 00:28:34.007 09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:34.007 09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # '[' -z 3501518 ']' 00:28:34.007 09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local rpc_addr=/tmp/host.sock 00:28:34.007 
09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local max_retries=100 00:28:34.007 09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:34.007 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:34.007 09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@843 -- # xtrace_disable 00:28:34.007 09:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:34.007 [2024-10-07 09:49:33.528346] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:28:34.007 [2024-10-07 09:49:33.528410] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3501518 ] 00:28:34.007 [2024-10-07 09:49:33.610290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.269 [2024-10-07 09:49:33.706350] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.842 09:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:28:34.842 09:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@867 -- # return 0 00:28:34.842 09:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:34.842 09:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:34.842 09:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:34.842 09:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:34.842 09:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:34.842 09:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:34.842 09:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:34.842 09:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:34.842 09:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:34.842 09:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:34.842 09:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:34.842 09:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:36.230 [2024-10-07 09:49:35.522848] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:36.230 [2024-10-07 09:49:35.522885] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 
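The RPC sequence traced above configures the host-side nvmf_tgt over its private socket and then starts discovery against the target, with short loss/reconnect timeouts so that removing the interface is noticed within seconds. rpc_cmd in the log is the test suite's wrapper around SPDK's stock rpc.py; the same calls issued directly, with the socket path, address, and arguments copied from the trace, would look like this:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/tmp/host.sock

    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options -e 1
    "$spdk/scripts/rpc.py" -s "$sock" framework_start_init
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

--wait-for-attach makes the call block until the discovered subsystem's controller is attached, which is why the very next trace lines show the discovery ctrlr connecting and nvme0n1 appearing.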
00:28:36.230 [2024-10-07 09:49:35.522899] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:36.230 [2024-10-07 09:49:35.649296] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:36.230 [2024-10-07 09:49:35.875068] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:36.230 [2024-10-07 09:49:35.875119] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:36.230 [2024-10-07 09:49:35.875140] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:36.230 [2024-10-07 09:49:35.875154] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:36.230 [2024-10-07 09:49:35.875174] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:36.230 09:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:36.230 09:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:36.230 09:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:36.230 [2024-10-07 09:49:35.880480] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xeb74d0 was disconnected and freed. delete nvme_qpair. 00:28:36.230 09:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:36.230 09:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:36.230 09:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:36.230 09:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:36.230 09:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:36.230 09:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:36.490 09:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:36.490 09:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:36.490 09:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:36.490 09:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:36.490 09:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:36.490 09:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:36.490 09:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:36.490 09:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:36.490 09:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:36.490 09:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:28:36.490 09:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:36.490 09:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:36.490 09:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:36.490 09:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:36.490 09:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:37.875 09:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:37.875 09:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:37.875 09:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:37.875 09:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:37.875 09:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:37.875 09:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:37.875 09:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:37.875 09:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:37.875 09:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:37.875 09:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:38.818 09:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:38.818 09:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:38.818 09:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:38.818 09:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:38.818 09:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:38.818 09:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:38.818 09:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:38.818 09:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:38.818 09:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:38.818 09:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:39.759 09:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:39.759 09:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:39.759 09:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:39.759 09:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:28:39.759 09:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:39.759 09:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:39.759 09:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:39.759 09:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:39.759 09:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:39.759 09:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:40.702 09:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:40.702 09:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:40.702 09:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:40.702 09:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:40.702 09:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:40.702 09:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:40.702 09:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:40.702 09:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:40.702 09:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:40.702 09:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:42.086 [2024-10-07 09:49:41.315840] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:42.086 [2024-10-07 09:49:41.315880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.086 [2024-10-07 09:49:41.315890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.086 [2024-10-07 09:49:41.315897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.086 [2024-10-07 09:49:41.315903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.086 [2024-10-07 09:49:41.315909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.086 [2024-10-07 09:49:41.315914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.086 [2024-10-07 09:49:41.315920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.086 [2024-10-07 09:49:41.315929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.086 [2024-10-07 
09:49:41.315936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.086 [2024-10-07 09:49:41.315941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.086 [2024-10-07 09:49:41.315946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe93f00 is same with the state(6) to be set 00:28:42.087 [2024-10-07 09:49:41.325863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe93f00 (9): Bad file descriptor 00:28:42.087 09:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:42.087 09:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:42.087 09:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:42.087 09:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:42.087 09:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:42.087 09:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:42.087 09:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:42.087 [2024-10-07 09:49:41.335899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:43.032 [2024-10-07 09:49:42.391754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:43.032 [2024-10-07 09:49:42.391845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe93f00 with addr=10.0.0.2, port=4420 00:28:43.032 [2024-10-07 09:49:42.391877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe93f00 is same with the state(6) to be set 00:28:43.032 [2024-10-07 09:49:42.391930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe93f00 (9): Bad file descriptor 00:28:43.032 [2024-10-07 09:49:42.393043] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:43.032 [2024-10-07 09:49:42.393113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:43.032 [2024-10-07 09:49:42.393137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:43.032 [2024-10-07 09:49:42.393160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:43.032 [2024-10-07 09:49:42.393224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
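Each repeated bdev_get_bdevs | jq | sort | xargs burst in the trace is one pass of the script's polling loop: after the target address is deleted and the port taken down, reconnect attempts fail (errno 110), the 2-second ctrlr-loss timeout expires, and the loop waits for nvme0n1 to vanish from the host's bdev list. A reconstructed sketch of that loop (helper bodies are inferred from the xtrace, not copied from discovery_remove_ifc.sh):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    get_bdev_list() {
        # names of all bdevs the host app currently exposes, sorted, one line
        "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1            # e.g. "nvme0n1", or "" to wait for removal
        while [[ $(get_bdev_list) != "$expected" ]]; do
            sleep 1                  # re-poll once per second, as in the trace
        done
    }

    wait_for_bdev ""                 # block until the lost controller's bdev is gone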
00:28:43.032 [2024-10-07 09:49:42.393249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:43.032 09:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:43.032 09:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:43.032 09:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:44.061 [2024-10-07 09:49:43.395645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:44.061 [2024-10-07 09:49:43.395663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:44.061 [2024-10-07 09:49:43.395669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:44.061 [2024-10-07 09:49:43.395675] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:28:44.061 [2024-10-07 09:49:43.395684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.061 [2024-10-07 09:49:43.395704] bdev_nvme.c:6915:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:44.061 [2024-10-07 09:49:43.395721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.061 [2024-10-07 09:49:43.395728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.061 [2024-10-07 09:49:43.395736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.061 [2024-10-07 09:49:43.395741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.061 [2024-10-07 09:49:43.395747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.061 [2024-10-07 09:49:43.395753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.061 [2024-10-07 09:49:43.395759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.061 [2024-10-07 09:49:43.395765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.061 [2024-10-07 09:49:43.395771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.061 [2024-10-07 09:49:43.395776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.061 [2024-10-07 09:49:43.395782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:28:44.061 [2024-10-07 09:49:43.396178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe83640 (9): Bad file descriptor 00:28:44.061 [2024-10-07 09:49:43.397188] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:44.061 [2024-10-07 09:49:43.397197] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:44.061 09:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:45.009 09:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:45.009 09:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:45.009 09:49:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:45.009 09:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:45.009 09:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:45.009 09:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:45.009 09:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:45.009 09:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:45.270 09:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:45.270 09:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:45.841 [2024-10-07 09:49:45.457591] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:45.841 [2024-10-07 09:49:45.457605] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:45.841 [2024-10-07 09:49:45.457618] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:46.102 [2024-10-07 09:49:45.587000] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:46.102 [2024-10-07 09:49:45.646190] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:46.102 [2024-10-07 09:49:45.646220] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:46.102 [2024-10-07 09:49:45.646236] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:46.102 [2024-10-07 09:49:45.646248] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:46.102 [2024-10-07 09:49:45.646253] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:46.102 [2024-10-07 09:49:45.654471] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xe9e400 was disconnected and freed. delete nvme_qpair. 
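[editorial note] The get_bdev_list/wait_for_bdev exchanges in this section poll the host app over its RPC socket until the expected bdev reappears once the discovery service re-attaches. A minimal sketch of that polling pattern, assuming the helper names and RPC call visible in this log (the upstream discovery_remove_ifc.sh may differ in detail):

    get_bdev_list() {
        # Ask the host app for its bdevs and normalize the names for comparison
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Re-check once per second until the expected bdev (e.g. nvme1n1) is listed
        while [[ "$(get_bdev_list)" != *"$1"* ]]; do
            sleep 1
        done
    }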
00:28:46.102 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:46.102 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:46.102 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:46.102 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:46.102 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:46.102 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:46.102 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:46.102 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:46.102 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:46.102 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:46.102 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3501518 00:28:46.102 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' -z 3501518 ']' 00:28:46.102 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # kill -0 3501518 00:28:46.102 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # uname 00:28:46.102 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:28:46.102 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3501518 00:28:46.362 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:28:46.362 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:28:46.362 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3501518' 00:28:46.362 killing process with pid 3501518 00:28:46.362 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # kill 3501518 00:28:46.362 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@977 -- # wait 3501518 00:28:46.362 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:46.362 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:46.363 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:28:46.363 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:46.363 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:28:46.363 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:46.363 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:46.363 rmmod nvme_tcp 00:28:46.363 rmmod nvme_fabrics 00:28:46.363 rmmod nvme_keyring 00:28:46.363 09:49:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:46.363 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:28:46.363 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:28:46.363 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 3501225 ']' 00:28:46.363 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 3501225 00:28:46.363 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' -z 3501225 ']' 00:28:46.363 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # kill -0 3501225 00:28:46.363 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # uname 00:28:46.363 09:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:28:46.363 09:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3501225 00:28:46.623 09:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:28:46.623 09:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:28:46.623 09:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3501225' 00:28:46.623 killing process with pid 3501225 00:28:46.623 09:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # kill 3501225 00:28:46.623 09:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@977 -- # wait 3501225 00:28:46.623 09:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:46.623 09:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:46.623 09:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:46.623 09:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:28:46.623 09:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:28:46.623 09:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:46.623 09:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:28:46.623 09:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:46.623 09:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:46.623 09:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.623 09:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.623 09:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:49.168 00:28:49.168 real 0m23.664s 00:28:49.168 user 0m27.645s 00:28:49.168 sys 0m7.247s 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1129 -- # xtrace_disable 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:49.168 ************************************ 00:28:49.168 END TEST nvmf_discovery_remove_ifc 00:28:49.168 ************************************ 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1110 -- # xtrace_disable 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.168 ************************************ 00:28:49.168 START TEST nvmf_identify_kernel_target 00:28:49.168 ************************************ 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:49.168 * Looking for test storage... 00:28:49.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1626 -- # lcov --version 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:28:49.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.168 --rc genhtml_branch_coverage=1 00:28:49.168 --rc genhtml_function_coverage=1 00:28:49.168 --rc genhtml_legend=1 00:28:49.168 --rc geninfo_all_blocks=1 00:28:49.168 --rc geninfo_unexecuted_blocks=1 00:28:49.168 00:28:49.168 ' 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:28:49.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.168 --rc genhtml_branch_coverage=1 00:28:49.168 --rc genhtml_function_coverage=1 00:28:49.168 --rc genhtml_legend=1 00:28:49.168 --rc geninfo_all_blocks=1 00:28:49.168 --rc geninfo_unexecuted_blocks=1 00:28:49.168 00:28:49.168 ' 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:28:49.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.168 --rc genhtml_branch_coverage=1 00:28:49.168 --rc genhtml_function_coverage=1 00:28:49.168 --rc genhtml_legend=1 00:28:49.168 --rc geninfo_all_blocks=1 00:28:49.168 --rc geninfo_unexecuted_blocks=1 00:28:49.168 00:28:49.168 ' 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:28:49.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.168 --rc genhtml_branch_coverage=1 00:28:49.168 --rc genhtml_function_coverage=1 00:28:49.168 --rc genhtml_legend=1 00:28:49.168 --rc geninfo_all_blocks=1 00:28:49.168 --rc geninfo_unexecuted_blocks=1 00:28:49.168 00:28:49.168 ' 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.168 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:49.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@309 -- # xtrace_disable 00:28:49.169 09:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:57.315 09:49:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:57.315 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:57.315 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # 
(( 1 == 0 )) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:57.315 Found net devices under 0000:31:00.0: cvl_0_0 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:57.315 Found net devices under 0000:31:00.1: cvl_0_1 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:57.315 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:57.316 09:49:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:57.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:57.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.720 ms 00:28:57.316 00:28:57.316 --- 10.0.0.2 ping statistics --- 00:28:57.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.316 rtt min/avg/max/mdev = 0.720/0.720/0.720/0.000 ms 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:57.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:57.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:28:57.316 00:28:57.316 --- 10.0.0.1 ping statistics --- 00:28:57.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.316 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:57.316 09:49:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:00.622 Waiting for block devices as requested 00:29:00.622 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:00.622 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:00.622 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:00.622 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:00.884 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:00.884 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:00.884 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:01.146 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:01.146 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:01.407 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:01.407 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:01.407 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:01.668 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:01.668 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:01.668 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:01.929 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:01.929 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1593 -- # local device=nvme0n1 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1595 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1596 -- # [[ none != none ]] 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:02.191 No valid GPT data, bailing 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@394 -- # pt= 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:29:02.191 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:02.454 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:29:02.454 00:29:02.454 Discovery Log Number of Records 2, Generation counter 2 00:29:02.454 =====Discovery Log Entry 0====== 00:29:02.454 trtype: tcp 00:29:02.454 adrfam: ipv4 00:29:02.454 subtype: current discovery subsystem 00:29:02.454 treq: not specified, sq flow control disable supported 00:29:02.454 portid: 1 00:29:02.454 trsvcid: 4420 00:29:02.454 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:02.454 traddr: 10.0.0.1 00:29:02.454 eflags: none 00:29:02.454 sectype: none 00:29:02.454 =====Discovery Log Entry 1====== 00:29:02.454 trtype: tcp 00:29:02.454 adrfam: ipv4 00:29:02.454 subtype: nvme subsystem 00:29:02.454 treq: not specified, sq flow control disable supported 00:29:02.454 portid: 1 00:29:02.454 trsvcid: 4420 00:29:02.454 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:02.454 traddr: 10.0.0.1 00:29:02.454 eflags: none 00:29:02.454 sectype: none 00:29:02.454 09:50:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:29:02.454 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:29:02.454 ===================================================== 00:29:02.454 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 
00:29:02.454 ===================================================== 00:29:02.454 Controller Capabilities/Features 00:29:02.454 ================================ 00:29:02.454 Vendor ID: 0000 00:29:02.454 Subsystem Vendor ID: 0000 00:29:02.454 Serial Number: e6793507c6d38beefb81 00:29:02.454 Model Number: Linux 00:29:02.454 Firmware Version: 6.8.9-20 00:29:02.454 Recommended Arb Burst: 0 00:29:02.454 IEEE OUI Identifier: 00 00 00 00:29:02.454 Multi-path I/O 00:29:02.454 May have multiple subsystem ports: No 00:29:02.454 May have multiple controllers: No 00:29:02.454 Associated with SR-IOV VF: No 00:29:02.454 Max Data Transfer Size: Unlimited 00:29:02.454 Max Number of Namespaces: 0 00:29:02.454 Max Number of I/O Queues: 1024 00:29:02.454 NVMe Specification Version (VS): 1.3 00:29:02.454 NVMe Specification Version (Identify): 1.3 00:29:02.454 Maximum Queue Entries: 1024 00:29:02.454 Contiguous Queues Required: No 00:29:02.454 Arbitration Mechanisms Supported 00:29:02.454 Weighted Round Robin: Not Supported 00:29:02.454 Vendor Specific: Not Supported 00:29:02.454 Reset Timeout: 7500 ms 00:29:02.454 Doorbell Stride: 4 bytes 00:29:02.454 NVM Subsystem Reset: Not Supported 00:29:02.454 Command Sets Supported 00:29:02.454 NVM Command Set: Supported 00:29:02.454 Boot Partition: Not Supported 00:29:02.454 Memory Page Size Minimum: 4096 bytes 00:29:02.454 Memory Page Size Maximum: 4096 bytes 00:29:02.454 Persistent Memory Region: Not Supported 00:29:02.454 Optional Asynchronous Events Supported 00:29:02.454 Namespace Attribute Notices: Not Supported 00:29:02.454 Firmware Activation Notices: Not Supported 00:29:02.454 ANA Change Notices: Not Supported 00:29:02.454 PLE Aggregate Log Change Notices: Not Supported 00:29:02.454 LBA Status Info Alert Notices: Not Supported 00:29:02.454 EGE Aggregate Log Change Notices: Not Supported 00:29:02.454 Normal NVM Subsystem Shutdown event: Not Supported 00:29:02.454 Zone Descriptor Change Notices: Not Supported 00:29:02.454 Discovery Log Change Notices: Supported 00:29:02.454 Controller Attributes 00:29:02.454 128-bit Host Identifier: Not Supported 00:29:02.454 Non-Operational Permissive Mode: Not Supported 00:29:02.454 NVM Sets: Not Supported 00:29:02.454 Read Recovery Levels: Not Supported 00:29:02.454 Endurance Groups: Not Supported 00:29:02.454 Predictable Latency Mode: Not Supported 00:29:02.454 Traffic Based Keep ALive: Not Supported 00:29:02.454 Namespace Granularity: Not Supported 00:29:02.454 SQ Associations: Not Supported 00:29:02.454 UUID List: Not Supported 00:29:02.454 Multi-Domain Subsystem: Not Supported 00:29:02.454 Fixed Capacity Management: Not Supported 00:29:02.454 Variable Capacity Management: Not Supported 00:29:02.454 Delete Endurance Group: Not Supported 00:29:02.454 Delete NVM Set: Not Supported 00:29:02.454 Extended LBA Formats Supported: Not Supported 00:29:02.454 Flexible Data Placement Supported: Not Supported 00:29:02.454 00:29:02.454 Controller Memory Buffer Support 00:29:02.454 ================================ 00:29:02.454 Supported: No 00:29:02.454 00:29:02.454 Persistent Memory Region Support 00:29:02.454 ================================ 00:29:02.454 Supported: No 00:29:02.454 00:29:02.454 Admin Command Set Attributes 00:29:02.454 ============================ 00:29:02.454 Security Send/Receive: Not Supported 00:29:02.454 Format NVM: Not Supported 00:29:02.454 Firmware Activate/Download: Not Supported 00:29:02.454 Namespace Management: Not Supported 00:29:02.454 Device Self-Test: Not Supported 00:29:02.454 Directives: Not Supported 
00:29:02.454 NVMe-MI: Not Supported 00:29:02.454 Virtualization Management: Not Supported 00:29:02.454 Doorbell Buffer Config: Not Supported 00:29:02.454 Get LBA Status Capability: Not Supported 00:29:02.454 Command & Feature Lockdown Capability: Not Supported 00:29:02.454 Abort Command Limit: 1 00:29:02.454 Async Event Request Limit: 1 00:29:02.454 Number of Firmware Slots: N/A 00:29:02.454 Firmware Slot 1 Read-Only: N/A 00:29:02.454 Firmware Activation Without Reset: N/A 00:29:02.454 Multiple Update Detection Support: N/A 00:29:02.454 Firmware Update Granularity: No Information Provided 00:29:02.454 Per-Namespace SMART Log: No 00:29:02.454 Asymmetric Namespace Access Log Page: Not Supported 00:29:02.454 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:02.454 Command Effects Log Page: Not Supported 00:29:02.454 Get Log Page Extended Data: Supported 00:29:02.454 Telemetry Log Pages: Not Supported 00:29:02.454 Persistent Event Log Pages: Not Supported 00:29:02.454 Supported Log Pages Log Page: May Support 00:29:02.454 Commands Supported & Effects Log Page: Not Supported 00:29:02.454 Feature Identifiers & Effects Log Page:May Support 00:29:02.454 NVMe-MI Commands & Effects Log Page: May Support 00:29:02.454 Data Area 4 for Telemetry Log: Not Supported 00:29:02.454 Error Log Page Entries Supported: 1 00:29:02.454 Keep Alive: Not Supported 00:29:02.454 00:29:02.454 NVM Command Set Attributes 00:29:02.454 ========================== 00:29:02.454 Submission Queue Entry Size 00:29:02.454 Max: 1 00:29:02.454 Min: 1 00:29:02.454 Completion Queue Entry Size 00:29:02.454 Max: 1 00:29:02.454 Min: 1 00:29:02.454 Number of Namespaces: 0 00:29:02.454 Compare Command: Not Supported 00:29:02.454 Write Uncorrectable Command: Not Supported 00:29:02.454 Dataset Management Command: Not Supported 00:29:02.455 Write Zeroes Command: Not Supported 00:29:02.455 Set Features Save Field: Not Supported 00:29:02.455 Reservations: Not Supported 00:29:02.455 Timestamp: Not Supported 00:29:02.455 Copy: Not Supported 00:29:02.455 Volatile Write Cache: Not Present 00:29:02.455 Atomic Write Unit (Normal): 1 00:29:02.455 Atomic Write Unit (PFail): 1 00:29:02.455 Atomic Compare & Write Unit: 1 00:29:02.455 Fused Compare & Write: Not Supported 00:29:02.455 Scatter-Gather List 00:29:02.455 SGL Command Set: Supported 00:29:02.455 SGL Keyed: Not Supported 00:29:02.455 SGL Bit Bucket Descriptor: Not Supported 00:29:02.455 SGL Metadata Pointer: Not Supported 00:29:02.455 Oversized SGL: Not Supported 00:29:02.455 SGL Metadata Address: Not Supported 00:29:02.455 SGL Offset: Supported 00:29:02.455 Transport SGL Data Block: Not Supported 00:29:02.455 Replay Protected Memory Block: Not Supported 00:29:02.455 00:29:02.455 Firmware Slot Information 00:29:02.455 ========================= 00:29:02.455 Active slot: 0 00:29:02.455 00:29:02.455 00:29:02.455 Error Log 00:29:02.455 ========= 00:29:02.455 00:29:02.455 Active Namespaces 00:29:02.455 ================= 00:29:02.455 Discovery Log Page 00:29:02.455 ================== 00:29:02.455 Generation Counter: 2 00:29:02.455 Number of Records: 2 00:29:02.455 Record Format: 0 00:29:02.455 00:29:02.455 Discovery Log Entry 0 00:29:02.455 ---------------------- 00:29:02.455 Transport Type: 3 (TCP) 00:29:02.455 Address Family: 1 (IPv4) 00:29:02.455 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:02.455 Entry Flags: 00:29:02.455 Duplicate Returned Information: 0 00:29:02.455 Explicit Persistent Connection Support for Discovery: 0 00:29:02.455 Transport Requirements: 00:29:02.455 Secure 
Channel: Not Specified 00:29:02.455 Port ID: 1 (0x0001) 00:29:02.455 Controller ID: 65535 (0xffff) 00:29:02.455 Admin Max SQ Size: 32 00:29:02.455 Transport Service Identifier: 4420 00:29:02.455 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:02.455 Transport Address: 10.0.0.1 00:29:02.455 Discovery Log Entry 1 00:29:02.455 ---------------------- 00:29:02.455 Transport Type: 3 (TCP) 00:29:02.455 Address Family: 1 (IPv4) 00:29:02.455 Subsystem Type: 2 (NVM Subsystem) 00:29:02.455 Entry Flags: 00:29:02.455 Duplicate Returned Information: 0 00:29:02.455 Explicit Persistent Connection Support for Discovery: 0 00:29:02.455 Transport Requirements: 00:29:02.455 Secure Channel: Not Specified 00:29:02.455 Port ID: 1 (0x0001) 00:29:02.455 Controller ID: 65535 (0xffff) 00:29:02.455 Admin Max SQ Size: 32 00:29:02.455 Transport Service Identifier: 4420 00:29:02.455 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:29:02.455 Transport Address: 10.0.0.1 00:29:02.455 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:02.716 get_feature(0x01) failed 00:29:02.716 get_feature(0x02) failed 00:29:02.716 get_feature(0x04) failed 00:29:02.716 ===================================================== 00:29:02.716 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:02.716 ===================================================== 00:29:02.716 Controller Capabilities/Features 00:29:02.716 ================================ 00:29:02.716 Vendor ID: 0000 00:29:02.716 Subsystem Vendor ID: 0000 00:29:02.716 Serial Number: eadec77ce68bbc7aff26 00:29:02.716 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:29:02.716 Firmware Version: 6.8.9-20 00:29:02.716 Recommended Arb Burst: 6 00:29:02.716 IEEE OUI Identifier: 00 00 00 00:29:02.716 Multi-path I/O 00:29:02.716 May have multiple subsystem ports: Yes 00:29:02.716 May have multiple controllers: Yes 00:29:02.716 Associated with SR-IOV VF: No 00:29:02.716 Max Data Transfer Size: Unlimited 00:29:02.716 Max Number of Namespaces: 1024 00:29:02.716 Max Number of I/O Queues: 128 00:29:02.716 NVMe Specification Version (VS): 1.3 00:29:02.716 NVMe Specification Version (Identify): 1.3 00:29:02.716 Maximum Queue Entries: 1024 00:29:02.716 Contiguous Queues Required: No 00:29:02.716 Arbitration Mechanisms Supported 00:29:02.716 Weighted Round Robin: Not Supported 00:29:02.716 Vendor Specific: Not Supported 00:29:02.716 Reset Timeout: 7500 ms 00:29:02.716 Doorbell Stride: 4 bytes 00:29:02.716 NVM Subsystem Reset: Not Supported 00:29:02.716 Command Sets Supported 00:29:02.716 NVM Command Set: Supported 00:29:02.716 Boot Partition: Not Supported 00:29:02.716 Memory Page Size Minimum: 4096 bytes 00:29:02.716 Memory Page Size Maximum: 4096 bytes 00:29:02.716 Persistent Memory Region: Not Supported 00:29:02.716 Optional Asynchronous Events Supported 00:29:02.716 Namespace Attribute Notices: Supported 00:29:02.716 Firmware Activation Notices: Not Supported 00:29:02.716 ANA Change Notices: Supported 00:29:02.716 PLE Aggregate Log Change Notices: Not Supported 00:29:02.716 LBA Status Info Alert Notices: Not Supported 00:29:02.716 EGE Aggregate Log Change Notices: Not Supported 00:29:02.716 Normal NVM Subsystem Shutdown event: Not Supported 00:29:02.716 Zone Descriptor Change Notices: Not Supported 00:29:02.716 Discovery 
Log Change Notices: Not Supported 00:29:02.716 Controller Attributes 00:29:02.716 128-bit Host Identifier: Supported 00:29:02.716 Non-Operational Permissive Mode: Not Supported 00:29:02.716 NVM Sets: Not Supported 00:29:02.716 Read Recovery Levels: Not Supported 00:29:02.716 Endurance Groups: Not Supported 00:29:02.716 Predictable Latency Mode: Not Supported 00:29:02.716 Traffic Based Keep ALive: Supported 00:29:02.716 Namespace Granularity: Not Supported 00:29:02.716 SQ Associations: Not Supported 00:29:02.716 UUID List: Not Supported 00:29:02.716 Multi-Domain Subsystem: Not Supported 00:29:02.716 Fixed Capacity Management: Not Supported 00:29:02.716 Variable Capacity Management: Not Supported 00:29:02.716 Delete Endurance Group: Not Supported 00:29:02.716 Delete NVM Set: Not Supported 00:29:02.716 Extended LBA Formats Supported: Not Supported 00:29:02.716 Flexible Data Placement Supported: Not Supported 00:29:02.716 00:29:02.716 Controller Memory Buffer Support 00:29:02.716 ================================ 00:29:02.716 Supported: No 00:29:02.716 00:29:02.716 Persistent Memory Region Support 00:29:02.716 ================================ 00:29:02.716 Supported: No 00:29:02.716 00:29:02.716 Admin Command Set Attributes 00:29:02.716 ============================ 00:29:02.716 Security Send/Receive: Not Supported 00:29:02.716 Format NVM: Not Supported 00:29:02.716 Firmware Activate/Download: Not Supported 00:29:02.716 Namespace Management: Not Supported 00:29:02.716 Device Self-Test: Not Supported 00:29:02.716 Directives: Not Supported 00:29:02.716 NVMe-MI: Not Supported 00:29:02.716 Virtualization Management: Not Supported 00:29:02.716 Doorbell Buffer Config: Not Supported 00:29:02.716 Get LBA Status Capability: Not Supported 00:29:02.716 Command & Feature Lockdown Capability: Not Supported 00:29:02.716 Abort Command Limit: 4 00:29:02.716 Async Event Request Limit: 4 00:29:02.716 Number of Firmware Slots: N/A 00:29:02.716 Firmware Slot 1 Read-Only: N/A 00:29:02.716 Firmware Activation Without Reset: N/A 00:29:02.716 Multiple Update Detection Support: N/A 00:29:02.716 Firmware Update Granularity: No Information Provided 00:29:02.716 Per-Namespace SMART Log: Yes 00:29:02.716 Asymmetric Namespace Access Log Page: Supported 00:29:02.716 ANA Transition Time : 10 sec 00:29:02.716 00:29:02.716 Asymmetric Namespace Access Capabilities 00:29:02.716 ANA Optimized State : Supported 00:29:02.716 ANA Non-Optimized State : Supported 00:29:02.716 ANA Inaccessible State : Supported 00:29:02.716 ANA Persistent Loss State : Supported 00:29:02.716 ANA Change State : Supported 00:29:02.716 ANAGRPID is not changed : No 00:29:02.716 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:29:02.716 00:29:02.716 ANA Group Identifier Maximum : 128 00:29:02.716 Number of ANA Group Identifiers : 128 00:29:02.716 Max Number of Allowed Namespaces : 1024 00:29:02.716 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:29:02.716 Command Effects Log Page: Supported 00:29:02.716 Get Log Page Extended Data: Supported 00:29:02.716 Telemetry Log Pages: Not Supported 00:29:02.716 Persistent Event Log Pages: Not Supported 00:29:02.716 Supported Log Pages Log Page: May Support 00:29:02.716 Commands Supported & Effects Log Page: Not Supported 00:29:02.716 Feature Identifiers & Effects Log Page:May Support 00:29:02.716 NVMe-MI Commands & Effects Log Page: May Support 00:29:02.716 Data Area 4 for Telemetry Log: Not Supported 00:29:02.716 Error Log Page Entries Supported: 128 00:29:02.716 Keep Alive: Supported 00:29:02.716 Keep Alive 
Granularity: 1000 ms 00:29:02.716 00:29:02.716 NVM Command Set Attributes 00:29:02.716 ========================== 00:29:02.716 Submission Queue Entry Size 00:29:02.716 Max: 64 00:29:02.716 Min: 64 00:29:02.716 Completion Queue Entry Size 00:29:02.716 Max: 16 00:29:02.716 Min: 16 00:29:02.716 Number of Namespaces: 1024 00:29:02.716 Compare Command: Not Supported 00:29:02.716 Write Uncorrectable Command: Not Supported 00:29:02.716 Dataset Management Command: Supported 00:29:02.716 Write Zeroes Command: Supported 00:29:02.716 Set Features Save Field: Not Supported 00:29:02.716 Reservations: Not Supported 00:29:02.716 Timestamp: Not Supported 00:29:02.716 Copy: Not Supported 00:29:02.716 Volatile Write Cache: Present 00:29:02.716 Atomic Write Unit (Normal): 1 00:29:02.716 Atomic Write Unit (PFail): 1 00:29:02.716 Atomic Compare & Write Unit: 1 00:29:02.716 Fused Compare & Write: Not Supported 00:29:02.716 Scatter-Gather List 00:29:02.716 SGL Command Set: Supported 00:29:02.716 SGL Keyed: Not Supported 00:29:02.716 SGL Bit Bucket Descriptor: Not Supported 00:29:02.716 SGL Metadata Pointer: Not Supported 00:29:02.716 Oversized SGL: Not Supported 00:29:02.716 SGL Metadata Address: Not Supported 00:29:02.716 SGL Offset: Supported 00:29:02.716 Transport SGL Data Block: Not Supported 00:29:02.716 Replay Protected Memory Block: Not Supported 00:29:02.716 00:29:02.716 Firmware Slot Information 00:29:02.716 ========================= 00:29:02.716 Active slot: 0 00:29:02.716 00:29:02.716 Asymmetric Namespace Access 00:29:02.716 =========================== 00:29:02.716 Change Count : 0 00:29:02.716 Number of ANA Group Descriptors : 1 00:29:02.716 ANA Group Descriptor : 0 00:29:02.716 ANA Group ID : 1 00:29:02.716 Number of NSID Values : 1 00:29:02.716 Change Count : 0 00:29:02.716 ANA State : 1 00:29:02.716 Namespace Identifier : 1 00:29:02.716 00:29:02.716 Commands Supported and Effects 00:29:02.716 ============================== 00:29:02.716 Admin Commands 00:29:02.716 -------------- 00:29:02.716 Get Log Page (02h): Supported 00:29:02.716 Identify (06h): Supported 00:29:02.716 Abort (08h): Supported 00:29:02.716 Set Features (09h): Supported 00:29:02.716 Get Features (0Ah): Supported 00:29:02.716 Asynchronous Event Request (0Ch): Supported 00:29:02.716 Keep Alive (18h): Supported 00:29:02.716 I/O Commands 00:29:02.716 ------------ 00:29:02.716 Flush (00h): Supported 00:29:02.716 Write (01h): Supported LBA-Change 00:29:02.716 Read (02h): Supported 00:29:02.716 Write Zeroes (08h): Supported LBA-Change 00:29:02.716 Dataset Management (09h): Supported 00:29:02.716 00:29:02.716 Error Log 00:29:02.716 ========= 00:29:02.716 Entry: 0 00:29:02.716 Error Count: 0x3 00:29:02.716 Submission Queue Id: 0x0 00:29:02.716 Command Id: 0x5 00:29:02.716 Phase Bit: 0 00:29:02.716 Status Code: 0x2 00:29:02.716 Status Code Type: 0x0 00:29:02.716 Do Not Retry: 1 00:29:02.716 Error Location: 0x28 00:29:02.716 LBA: 0x0 00:29:02.716 Namespace: 0x0 00:29:02.716 Vendor Log Page: 0x0 00:29:02.716 ----------- 00:29:02.716 Entry: 1 00:29:02.716 Error Count: 0x2 00:29:02.716 Submission Queue Id: 0x0 00:29:02.716 Command Id: 0x5 00:29:02.716 Phase Bit: 0 00:29:02.716 Status Code: 0x2 00:29:02.716 Status Code Type: 0x0 00:29:02.716 Do Not Retry: 1 00:29:02.716 Error Location: 0x28 00:29:02.716 LBA: 0x0 00:29:02.716 Namespace: 0x0 00:29:02.716 Vendor Log Page: 0x0 00:29:02.716 ----------- 00:29:02.716 Entry: 2 00:29:02.716 Error Count: 0x1 00:29:02.716 Submission Queue Id: 0x0 00:29:02.716 Command Id: 0x4 00:29:02.716 Phase Bit: 0 
00:29:02.716 Status Code: 0x2 00:29:02.716 Status Code Type: 0x0 00:29:02.716 Do Not Retry: 1 00:29:02.716 Error Location: 0x28 00:29:02.716 LBA: 0x0 00:29:02.716 Namespace: 0x0 00:29:02.716 Vendor Log Page: 0x0 00:29:02.716 00:29:02.716 Number of Queues 00:29:02.716 ================ 00:29:02.716 Number of I/O Submission Queues: 128 00:29:02.716 Number of I/O Completion Queues: 128 00:29:02.716 00:29:02.716 ZNS Specific Controller Data 00:29:02.716 ============================ 00:29:02.716 Zone Append Size Limit: 0 00:29:02.716 00:29:02.716 00:29:02.716 Active Namespaces 00:29:02.716 ================= 00:29:02.716 get_feature(0x05) failed 00:29:02.716 Namespace ID:1 00:29:02.716 Command Set Identifier: NVM (00h) 00:29:02.716 Deallocate: Supported 00:29:02.716 Deallocated/Unwritten Error: Not Supported 00:29:02.716 Deallocated Read Value: Unknown 00:29:02.716 Deallocate in Write Zeroes: Not Supported 00:29:02.716 Deallocated Guard Field: 0xFFFF 00:29:02.716 Flush: Supported 00:29:02.716 Reservation: Not Supported 00:29:02.716 Namespace Sharing Capabilities: Multiple Controllers 00:29:02.716 Size (in LBAs): 3750748848 (1788GiB) 00:29:02.716 Capacity (in LBAs): 3750748848 (1788GiB) 00:29:02.716 Utilization (in LBAs): 3750748848 (1788GiB) 00:29:02.716 UUID: 7ad77caf-0278-40ce-a829-65061e6b8228 00:29:02.716 Thin Provisioning: Not Supported 00:29:02.716 Per-NS Atomic Units: Yes 00:29:02.716 Atomic Write Unit (Normal): 8 00:29:02.717 Atomic Write Unit (PFail): 8 00:29:02.717 Preferred Write Granularity: 8 00:29:02.717 Atomic Compare & Write Unit: 8 00:29:02.717 Atomic Boundary Size (Normal): 0 00:29:02.717 Atomic Boundary Size (PFail): 0 00:29:02.717 Atomic Boundary Offset: 0 00:29:02.717 NGUID/EUI64 Never Reused: No 00:29:02.717 ANA group ID: 1 00:29:02.717 Namespace Write Protected: No 00:29:02.717 Number of LBA Formats: 1 00:29:02.717 Current LBA Format: LBA Format #00 00:29:02.717 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:02.717 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:02.717 rmmod nvme_tcp 00:29:02.717 rmmod nvme_fabrics 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.717 09:50:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.632 09:50:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:04.632 09:50:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:29:04.632 09:50:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:04.632 09:50:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:29:04.894 09:50:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:04.894 09:50:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:04.894 09:50:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:04.894 09:50:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:04.894 09:50:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:29:04.894 09:50:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:29:04.894 09:50:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:09.108 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:09.108 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:09.108 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:09.108 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:09.108 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:09.108 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:09.108 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:09.108 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:09.108 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:09.108 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:09.108 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:09.108 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:09.108 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:09.108 0000:00:01.3 (8086 0b00): ioatdma 
-> vfio-pci 00:29:09.108 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:09.108 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:09.108 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:09.108 00:29:09.108 real 0m20.228s 00:29:09.108 user 0m5.337s 00:29:09.108 sys 0m11.890s 00:29:09.108 09:50:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # xtrace_disable 00:29:09.108 09:50:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:09.108 ************************************ 00:29:09.108 END TEST nvmf_identify_kernel_target 00:29:09.108 ************************************ 00:29:09.108 09:50:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:09.108 09:50:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:29:09.108 09:50:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1110 -- # xtrace_disable 00:29:09.108 09:50:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.108 ************************************ 00:29:09.108 START TEST nvmf_auth_host 00:29:09.108 ************************************ 00:29:09.108 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:09.370 * Looking for test storage... 00:29:09.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1626 -- # lcov --version 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:29:09.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.370 --rc genhtml_branch_coverage=1 00:29:09.370 --rc genhtml_function_coverage=1 00:29:09.370 --rc genhtml_legend=1 00:29:09.370 --rc geninfo_all_blocks=1 00:29:09.370 --rc geninfo_unexecuted_blocks=1 00:29:09.370 00:29:09.370 ' 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:29:09.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.370 --rc genhtml_branch_coverage=1 00:29:09.370 --rc genhtml_function_coverage=1 00:29:09.370 --rc genhtml_legend=1 00:29:09.370 --rc geninfo_all_blocks=1 00:29:09.370 --rc geninfo_unexecuted_blocks=1 00:29:09.370 00:29:09.370 ' 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:29:09.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.370 --rc genhtml_branch_coverage=1 00:29:09.370 --rc genhtml_function_coverage=1 00:29:09.370 --rc genhtml_legend=1 00:29:09.370 --rc geninfo_all_blocks=1 00:29:09.370 --rc geninfo_unexecuted_blocks=1 00:29:09.370 00:29:09.370 ' 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:29:09.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.370 --rc genhtml_branch_coverage=1 00:29:09.370 --rc genhtml_function_coverage=1 00:29:09.370 --rc genhtml_legend=1 00:29:09.370 --rc geninfo_all_blocks=1 00:29:09.370 --rc geninfo_unexecuted_blocks=1 00:29:09.370 00:29:09.370 ' 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.370 09:50:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:09.370 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:29:09.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:29:09.371 09:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:17.518 09:50:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:17.519 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:17.519 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:17.519 Found net devices under 0000:31:00.0: cvl_0_0 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:17.519 Found net devices under 0000:31:00.1: cvl_0_1 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 
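The records above are nvmf/common.sh gathering the supported NVMe/TCP NICs: for each E810 PCI function it globs the device's net/ directory in sysfs to find the kernel interface backing it, and keeps only interfaces whose link is up (the [[ up == up ]] test in the trace). A minimal standalone sketch of that mapping, with the PCI addresses taken from this run (not the verbatim common.sh body):

#!/usr/bin/env bash
# Map each NIC PCI function to its net device the way the trace above
# does: the kernel lists a function's interfaces under
# /sys/bus/pci/devices/<addr>/net/. Addresses taken from this run.
shopt -s nullglob
pci_devs=(0000:31:00.0 0000:31:00.1)
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    ((${#pci_net_devs[@]})) || continue         # function backs no netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

This is what produces the two "Found net devices under 0000:31:00.x: cvl_0_x" lines above.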
00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:17.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:17.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:29:17.519 00:29:17.519 --- 10.0.0.2 ping statistics --- 00:29:17.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.519 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:17.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:29:17.519 00:29:17.519 --- 10.0.0.1 ping statistics --- 00:29:17.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.519 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=3516129 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 3516129 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # '[' -z 3516129 ']' 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local max_retries=100 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
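At this point nvmf_tcp_init has split the two ports into a target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and connectivity is ping-verified in both directions (the 0.707 ms and 0.318 ms round trips above) before the target starts. A condensed replay of the traced sequence, all names and addresses from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP traffic arriving on the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Verify both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# nvmfappstart then launches the target inside the namespace:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -L nvme_auth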
00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@843 -- # xtrace_disable 00:29:17.519 09:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@867 -- # return 0 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@733 -- # xtrace_disable 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=fed7f055bad5839b2c61efd160d2e72c 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.ds8 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key fed7f055bad5839b2c61efd160d2e72c 0 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 fed7f055bad5839b2c61efd160d2e72c 0 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=fed7f055bad5839b2c61efd160d2e72c 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.ds8 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.ds8 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ds8 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:29:18.092 09:50:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=15f6ca527c7886859cd9a900bdfa64e4b599dbb0de2b7c0874410499ac3aaf73 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.ega 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 15f6ca527c7886859cd9a900bdfa64e4b599dbb0de2b7c0874410499ac3aaf73 3 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 15f6ca527c7886859cd9a900bdfa64e4b599dbb0de2b7c0874410499ac3aaf73 3 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=15f6ca527c7886859cd9a900bdfa64e4b599dbb0de2b7c0874410499ac3aaf73 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.ega 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.ega 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ega 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=e85ec43c5186a14eeccd5d895b0e22d16f4dfeea8f76716a 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.VJP 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key e85ec43c5186a14eeccd5d895b0e22d16f4dfeea8f76716a 0 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 e85ec43c5186a14eeccd5d895b0e22d16f4dfeea8f76716a 0 
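The gen_dhchap_key calls traced here draw len/2 bytes from /dev/urandom as a hex string (xxd -p -c0), wrap them into a DH-HMAC-CHAP textual secret with an inline python step, and store the result mode 0600 in a mktemp file. A compact reconstruction, assuming the standard NVMe-oF secret representation (base64 of the key bytes followed by their CRC-32, little-endian); the python body below is a sketch of the format_key step, not the verbatim nvmf/common.sh code:

# gen_dhchap_key <hmac id: 0=null 1=sha256 2=sha384 3=sha512> <hex length>
gen_dhchap_key() {
    local hmac_id=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
    file=$(mktemp -t spdk.key-XXX)
    python3 - "$key" "$hmac_id" > "$file" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")   # appended per the spec's textual form
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(key + crc).decode()}:")
EOF
    chmod 0600 "$file"
    echo "$file"
}

gen_dhchap_key 0 32    # null digest, 32 hex chars, like keys[0]=/tmp/spdk.key-null.ds8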
00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=e85ec43c5186a14eeccd5d895b0e22d16f4dfeea8f76716a 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:29:18.092 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.VJP 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.VJP 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.VJP 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=535a6a172ddcacbc133a6d438f6351ff0b6540d0a593c181 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.CnO 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 535a6a172ddcacbc133a6d438f6351ff0b6540d0a593c181 2 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 535a6a172ddcacbc133a6d438f6351ff0b6540d0a593c181 2 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=535a6a172ddcacbc133a6d438f6351ff0b6540d0a593c181 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.CnO 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.CnO 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.CnO 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:29:18.354 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:18.355 09:50:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=ce5602dd2eb22e208e755f3f5d695fac 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.xqT 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key ce5602dd2eb22e208e755f3f5d695fac 1 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 ce5602dd2eb22e208e755f3f5d695fac 1 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=ce5602dd2eb22e208e755f3f5d695fac 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.xqT 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.xqT 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.xqT 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6c71c4db2a1b70b43532143c73925b59 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.31D 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6c71c4db2a1b70b43532143c73925b59 1 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6c71c4db2a1b70b43532143c73925b59 1 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=6c71c4db2a1b70b43532143c73925b59 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:29:18.355 09:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.31D 00:29:18.355 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.31D 00:29:18.355 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.31D 00:29:18.355 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:29:18.355 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:29:18.355 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:18.355 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:29:18.355 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:29:18.355 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:29:18.355 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:18.622 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=8826095250b096dc262b77c014e74977ceed23a883d7f42e 00:29:18.622 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:29:18.622 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.e3F 00:29:18.622 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 8826095250b096dc262b77c014e74977ceed23a883d7f42e 2 00:29:18.622 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 8826095250b096dc262b77c014e74977ceed23a883d7f42e 2 00:29:18.622 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=8826095250b096dc262b77c014e74977ceed23a883d7f42e 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.e3F 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.e3F 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.e3F 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:29:18.623 09:50:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=a26105e86a5245c58970e6fa2b71f10e 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.7Kc 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key a26105e86a5245c58970e6fa2b71f10e 0 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 a26105e86a5245c58970e6fa2b71f10e 0 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=a26105e86a5245c58970e6fa2b71f10e 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.7Kc 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.7Kc 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.7Kc 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=0e2ff7e17a0b73b59c9d8f51a2396be86ba7da9bb74700d8f60fe81fd90977ad 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.Of3 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 0e2ff7e17a0b73b59c9d8f51a2396be86ba7da9bb74700d8f60fe81fd90977ad 3 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 0e2ff7e17a0b73b59c9d8f51a2396be86ba7da9bb74700d8f60fe81fd90977ad 3 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=0e2ff7e17a0b73b59c9d8f51a2396be86ba7da9bb74700d8f60fe81fd90977ad 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.Of3 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.Of3 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Of3 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3516129 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # '[' -z 3516129 ']' 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local max_retries=100 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@843 -- # xtrace_disable 00:29:18.623 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@867 -- # return 0 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ds8 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ega ]] 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ega 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.VJP 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.CnO ]] 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.CnO 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.xqT 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.31D ]] 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.31D 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.e3F 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.7Kc ]] 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.7Kc 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Of3 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:18.885 09:50:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet
00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme
00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]]
00:29:18.885 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet
00:29:19.146 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]]
00:29:19.146 09:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:29:22.454 Waiting for block devices as requested
00:29:22.454 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:29:22.715 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:29:22.715 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:29:22.715 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:29:22.976 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:29:22.976 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:29:22.976 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:29:22.976 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:29:23.237 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:29:23.237 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:29:23.497 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:29:23.497 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:29:23.497 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:29:23.497 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:29:23.757 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:29:23.757 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:29:23.757 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme*
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]]
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1593 -- # local device=nvme0n1
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1595 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1596 -- # [[ none != none ]]
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:29:24.701 No valid GPT data, bailing
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]]
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1
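[Editor's sketch] configure_kernel_target has now loaded nvmet and created the configfs skeleton (subsystem, namespace 1, port 1); the echo records on the following lines fill in its attributes, though xtrace hides the redirection targets. A condensed sketch of the whole sequence, with attribute paths assumed from the stock Linux nvmet configfs layout (the destination of the SPDK-nqn... echo in particular is not visible in the trace):

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
ns=$subsys/namespaces/1
port=$nvmet/ports/1

modprobe nvmet
mkdir "$subsys" "$ns" "$port"   # configfs mkdirs, in dependency order

echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"  # model string (assumed target)
echo 1 > "$subsys/attr_allow_any_host"                       # auth.sh later echoes 0 here (assumed)
echo /dev/nvme0n1 > "$ns/device_path"                        # the blank namespace found above
echo 1 > "$ns/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                          # publishes the subsystem on the port

The trailing ln -s is what actually exposes the subsystem, which is why the nvme discover that follows reports two records: the discovery subsystem itself plus nqn.2024-02.io.spdk:cnode0.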
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1
00:29:24.701 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420
00:29:24.963
00:29:24.963 Discovery Log Number of Records 2, Generation counter 2
00:29:24.963 =====Discovery Log Entry 0======
00:29:24.963 trtype: tcp
00:29:24.963 adrfam: ipv4
00:29:24.963 subtype: current discovery subsystem
00:29:24.963 treq: not specified, sq flow control disable supported
00:29:24.963 portid: 1
00:29:24.963 trsvcid: 4420
00:29:24.963 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:29:24.963 traddr: 10.0.0.1
00:29:24.963 eflags: none
00:29:24.963 sectype: none
00:29:24.963 =====Discovery Log Entry 1======
00:29:24.963 trtype: tcp
00:29:24.963 adrfam: ipv4
00:29:24.963 subtype: nvme subsystem
00:29:24.963 treq: not specified, sq flow control disable supported
00:29:24.963 portid: 1
00:29:24.963 trsvcid: 4420
00:29:24.963 subnqn: nqn.2024-02.io.spdk:cnode0
00:29:24.963 traddr: 10.0.0.1
00:29:24.963 eflags: none
00:29:24.963 sectype: none
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==:
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==:
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@49 -- # echo ffdhe2048 00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: ]] 00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:24.963 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.964 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:24.964 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:24.964 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.964 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:24.964 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.964 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:24.964 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:24.964 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:24.964 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.964 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.964 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:24.964 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.964 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:24.964 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:24.964 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:24.964 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:24.964 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:24.964 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.224 nvme0n1 00:29:25.224 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:25.224 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.224 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.224 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:25.224 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.224 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: ]] 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
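[Editor's sketch] Each connect_authenticate round, like the keyid-0 one starting above, pairs a target-side configfs write with a host-side RPC. On the target, nvmet_auth_set_key pushes the DHHC-1 secrets plus the negotiated hash and DH group into the host entry created earlier; on the host, rpc_cmd (a wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock) restricts the allowed digests/dhgroups and attaches with the matching keyring names. A sketch under the assumption that the hidden echo targets are the standard nvmet host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), reusing the keys/ckeys files generated earlier:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

nvmet_auth_set_key() { # nvmet_auth_set_key <digest> <dhgroup> <keyid>
    local digest=$1 dhgroup=$2 keyid=$3 key ckey
    key=$(< "${keys[keyid]}")
    ckey=$(< "${ckeys[keyid]:-/dev/null}")
    echo "hmac($digest)" > "$host/dhchap_hash"
    echo "$dhgroup" > "$host/dhchap_dhgroup"
    echo "$key" > "$host/dhchap_key"
    if [[ -n $ckey ]]; then
        echo "$ckey" > "$host/dhchap_ctrl_key"   # only set for bidirectional auth
    fi
}

# Host side of one round (keyid 1 shown), mirroring the RPCs traced above:
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers   # a successful handshake lists nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0

Note the asymmetry: the target consumes the literal DHHC-1 strings, while the host refers to the key0..key4/ckey0..ckey4 names registered with keyring_file_add_key above.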
00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.225 nvme0n1 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.225 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.486 09:50:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: ]] 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:25.486 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:25.487 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.487 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:25.487 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.487 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:25.487 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:25.487 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:25.487 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.487 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.487 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:25.487 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.487 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:25.487 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:25.487 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:25.487 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:25.487 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:25.487 09:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.487 nvme0n1 00:29:25.487 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:25.487 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.487 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:25.487 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.487 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.487 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:25.487 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.487 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.487 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:25.487 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: ]] 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.749 nvme0n1 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: ]] 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.749 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.012 nvme0n1 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:26.012 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.275 nvme0n1 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:26.275 09:50:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: ]] 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:26.275 09:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.536 nvme0n1 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: ]] 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:26.536 
09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:26.536 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.798 nvme0n1 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: ]] 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.798 09:50:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:26.798 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.060 nvme0n1 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: ]] 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.060 09:50:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:27.060 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.322 nvme0n1 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:27.322 09:50:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:27.322 09:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.582 nvme0n1 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: ]] 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:27.583 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.846 nvme0n1 00:29:27.846 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:27.846 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.846 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.846 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:27.846 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.846 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:27.846 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.846 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.846 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:27.846 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.107 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:28.107 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.107 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:28.107 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.107 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:28.107 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:28.107 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:28.107 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:28.107 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:28.107 09:50:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:28.107 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: ]] 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:28.108 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.369 nvme0n1 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 
== 0 ]] 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: ]] 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 
00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.369 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:28.370 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:28.370 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:28.370 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:28.370 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:28.370 09:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.630 nvme0n1 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: ]] 00:29:28.630 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:28.631 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.906 nvme0n1 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:28.906 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.907 09:50:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:28.907 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.166 nvme0n1 00:29:29.166 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:29.166 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.166 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.166 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:29.166 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: ]] 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:29.428 09:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.689 nvme0n1 00:29:29.690 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:29.690 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.690 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.690 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:29.690 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.690 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: ]] 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 
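[annotation] The xtrace records above (host/auth.sh@42-51) outline nvmet_auth_set_key, the target-side half of each iteration: it installs the key, an optional controller key, the digest, and the DH group for the test host. A minimal sketch of those lines, using the keys/ckeys arrays visible in the trace; the echo targets are assumptions (redirections are not captured by xtrace), written here as hypothetical nvmet configfs attribute paths:

nvmet_auth_set_key() {                                 # host/auth.sh@42
	local digest=$1 dhgroup=$2 keyid=$3                # @44
	local key=${keys[keyid]} ckey=${ckeys[keyid]}      # @45-46
	echo "hmac($digest)" > "$host_cfg/dhchap_hash"     # @48; path assumed
	echo "$dhgroup" > "$host_cfg/dhchap_dhgroup"       # @49; path assumed
	echo "$key" > "$host_cfg/dhchap_key"               # @50; path assumed
	# @51: keyid 4 has no controller key in the trace, so this write is skipped there
	[[ -z $ckey ]] || echo "$ckey" > "$host_cfg/dhchap_ctrl_key"
}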
00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:29.952 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.213 nvme0n1 00:29:30.213 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:30.213 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.213 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.213 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:30.213 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.213 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:30.213 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.213 09:50:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.213 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:30.213 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.475 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:30.475 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.475 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:30.475 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.475 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:30.475 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:30.475 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:30.475 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:30.475 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:30.475 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:30.475 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:30.475 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: ]] 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:30.476 09:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.737 nvme0n1 00:29:30.737 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:30.737 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.737 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.737 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:30.737 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.737 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:30.737 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.737 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.737 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:30.737 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.737 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:30.737 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.737 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: ]] 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:30.738 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.999 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:30.999 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:30.999 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:31.000 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:31.000 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:31.000 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.261 nvme0n1 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:31.261 09:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.834 nvme0n1 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: ]] 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:31.834 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:32.406 nvme0n1 00:29:32.406 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:32.406 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.406 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.406 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:32.406 09:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: ]] 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:32.406 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.667 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:32.667 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.667 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:32.667 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:32.667 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:32.667 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.667 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.667 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:32.667 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.667 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:32.667 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:32.667 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:32.667 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:32.667 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:32.667 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.239 nvme0n1 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:33.239 
09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: ]] 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:33.239 09:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.810 nvme0n1 00:29:33.810 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:33.810 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.810 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.810 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:33.810 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.810 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: ]] 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.070 
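[annotation] get_main_ns_ip (nvmf/common.sh@767-781) traces just above every attach. Reconstructed from those records, it maps the transport to the shell variable holding the connect address and prints that variable's value; the TEST_TRANSPORT name and the ${!ip} indirection step are assumptions, since the trace only shows the literals tcp, NVMF_INITIATOR_IP, and 10.0.0.1:

get_main_ns_ip() {                                 # nvmf/common.sh@767
	local ip                                       # @767
	local -A ip_candidates=()                      # @768
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP     # @770
	ip_candidates["tcp"]=NVMF_INITIATOR_IP         # @771
	# @773: both tests appear in the trace, so they are chained with ||
	if [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
		return 1                                   # variable name assumed
	fi
	ip=${ip_candidates[$TEST_TRANSPORT]}           # @774
	ip=${!ip}                                      # indirection assumed; yields 10.0.0.1 here
	[[ -z $ip ]] && return 1                       # @776
	echo "$ip"                                     # @781
}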
09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:34.070 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:34.071 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.071 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.071 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:34.071 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.071 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:34.071 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:34.071 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:34.071 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:34.071 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:34.071 09:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.642 nvme0n1 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
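[annotation] connect_authenticate (host/auth.sh@55-65) is the host-side half of each iteration: it restricts the initiator to one digest and one DH group, attaches with the matching key, verifies the controller came up, and detaches. A sketch assembled from the trace; the hostnqn/subnqn variable names are stand-ins for the literal NQNs shown above, and rpc_cmd is the framework's usual RPC wrapper:

connect_authenticate() {                           # host/auth.sh@55
	local digest=$1 dhgroup=$2 keyid=$3 ckey
	# @58: expands to nothing when ckeys[keyid] is empty, so the
	# controller-key flag is omitted entirely (see keyid 4 above)
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
		--dhchap-dhgroups "$dhgroup"               # @60
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 -q "$hostnqn" -n "$subnqn" \
		--dhchap-key "key${keyid}" "${ckey[@]}"    # @61; NQN variable names assumed
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]  # @64
	rpc_cmd bdev_nvme_detach_controller nvme0      # @65
}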
common/autotest_common.sh@10 -- # set +x 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:34.642 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.242 nvme0n1 00:29:35.242 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:35.242 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.242 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.242 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:35.242 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.242 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: ]] 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:35.502 09:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.502 nvme0n1 00:29:35.502 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:35.502 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.502 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.502 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:35.502 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:35.502 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: ]] 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:35.763 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.764 nvme0n1 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:35.764 09:50:35 
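The repeating pattern running through this transcript is the driver loop traced as host/auth.sh@101-104: every DH group is exercised against every configured key id, first programming the expected key on the target side, then connecting from the host side. A minimal sketch of that loop, reconstructed from the xtrace tags (digest is sha384 for this whole pass; the dhgroups, keys and ckeys arrays are assumed to have been populated earlier in the script):

    for dhgroup in "${dhgroups[@]}"; do                        # host/auth.sh@101
        for keyid in "${!keys[@]}"; do                         # host/auth.sh@102
            # target side: install the key the host will be expected to present
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # host/auth.sh@103
            # host side: attach with that key and verify the controller comes up
            connect_authenticate "$digest" "$dhgroup" "$keyid" # host/auth.sh@104
        done
    done

The trace resumes below inside the keyid=2 iteration of this loop.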
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: ]] 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:35.764 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.024 nvme0n1 00:29:36.024 09:50:35 
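One detail worth calling out in the trace above: host/auth.sh@58 builds the controller-key arguments as ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), so the --dhchap-ctrlr-key flag pair is generated only when a controller key exists for that key id. Key id 4 has an empty ckey in this run, which is why its attach commands later in the log carry --dhchap-key key4 alone. A standalone sketch of the same :+ idiom (the array contents here are illustrative, not the test's real secrets):

    ckeys=("secret-for-0" "secret-for-1" "")   # indexed array; id 2 deliberately empty
    for keyid in 0 2; do
        # ${var:+words} expands to the words only if var is set and non-empty,
        # so ckey becomes a two-element array or stays empty
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "key$keyid extra args: ${ckey[*]:-<none>}"
    done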
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:36.024 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: ]] 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.025 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.285 nvme0n1 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.285 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:36.286 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:36.286 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:36.286 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.286 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.286 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:36.286 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.286 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:36.286 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:36.286 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:36.286 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:36.286 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.286 09:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.546 nvme0n1 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:36.546 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: ]] 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.547 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.808 nvme0n1 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:36.808 
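Each connect_authenticate cycle in this log, like the one that just completed for ffdhe3072 / key id 0, is the same four RPC steps: restrict the host to the digest and DH group under test, attach with the key material, confirm the controller enumerated, then detach. Condensed from the host/auth.sh@57-65 trace entries (rpc_cmd is the harness wrapper around SPDK's rpc.py; the address, port and NQNs are the fixed values used by this run):

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"  # @60
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" "${ckey[@]}"                                       # @61
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]            # @64
    rpc_cmd bdev_nvme_detach_controller nvme0                                           # @65

Note that with an empty ckey array, "${ckey[@]}" expands to zero words, so the attach is issued without any controller key, as seen in the key4 iterations.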
09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: ]] 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:36.808 09:50:36 
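The nvmf/common.sh@767-781 entries running through this point are get_main_ns_ip resolving the address the host should dial: an associative array maps each transport to the name of the environment variable holding its address, and for tcp that is NVMF_INITIATOR_IP, which expands to 10.0.0.1 in this run. A reconstruction of that helper from the traced lines (the TEST_TRANSPORT variable name is an assumption; the trace only shows its expanded value, tcp):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP   # nvmf/common.sh@770
            ["tcp"]=NVMF_INITIATOR_IP       # nvmf/common.sh@771
        )
        [[ -n $TEST_TRANSPORT ]] || return 1   # traced as [[ -z tcp ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP
        [[ -n $ip ]] || return 1               # traced as [[ -z NVMF_INITIATOR_IP ]]
        [[ -n ${!ip} ]] || return 1            # indirect expansion; traced as [[ -z 10.0.0.1 ]]
        echo "${!ip}"                          # nvmf/common.sh@781: prints 10.0.0.1 here
    }

The traced function continues below through the @781 echo.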
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.808 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.069 nvme0n1 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: ]] 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:37.069 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.330 nvme0n1 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: ]] 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:37.330 09:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.591 nvme0n1 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:37.591 
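A small bash quirk worth decoding in these traces: the controller-name check prints as [[ nvme0 == \n\v\m\e\0 ]]. Inside [[ ]] the right-hand side of == is a glob pattern, so the script quotes it to force a literal comparison, and xtrace renders that quoting as per-character backslash escapes. An illustrative snippet (not from the test script) showing the two equivalent spellings:

    name=nvme0
    [[ $name == "nvme0" ]] && echo match     # quoted: compared literally, not as a glob
    [[ $name == \n\v\m\e\0 ]] && echo match  # what xtrace prints for the same comparison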
09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:37.591 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:37.592 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:37.592 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.875 nvme0n1 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.875 
09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: ]] 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 
]] 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:37.875 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:37.876 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:37.876 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.182 nvme0n1 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: ]] 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:38.183 09:50:37 
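The DHHC-1:NN:...: strings echoed throughout are NVMe DH-HMAC-CHAP secrets in the standard nvme-cli representation: the middle field selects the optional secret-transformation HMAC (00 none, 01/02/03 for SHA-256/-384/-512, all four of which appear across keys 0-4 in this run), and the base64 payload is the raw secret followed by a 4-byte CRC32. That reading of the fields comes from the NVMe in-band authentication secret format, not from this log itself, so treat it as an assumption. A quick sanity check on one key taken verbatim from earlier in the transcript:

    key='DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX:'
    # field 3 is the base64 payload: secret bytes plus trailing CRC32
    printf '%s' "$key" | cut -d: -f3 | base64 -d | wc -c   # prints 36 = 32-byte secret + 4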
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:38.183 09:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.485 nvme0n1 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: ]] 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:38.485 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.746 nvme0n1 00:29:38.746 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:38.746 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.746 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.746 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:38.746 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: ]] 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:39.007 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.268 nvme0n1 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:39.268 09:50:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:39.268 09:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.529 nvme0n1 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
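
The ffdhe4096 pass above ends with keyid 4, whose controller key is empty (the [[ -z '' ]] check at host/auth.sh@51), so the attach carries only --dhchap-key key4: the host authenticates to the target but does not require the target to authenticate back. The trace line at host/auth.sh@58 is the bash idiom that makes the flag pair optional; a sketch of how it behaves (ckeys being the test's array of controller secrets, as in the trace):

  # ${var:+word} expands to word only when var is set and non-empty, so an empty
  # ckeys[4] makes ckey an empty array and the flag pair is omitted entirely
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  # (transport/address arguments elided, as in the attach lines traced above)
  rpc_cmd bdev_nvme_attach_controller ... --dhchap-key "key${keyid}" "${ckey[@]}"
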
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: ]] 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:39.529 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:39.530 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.530 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.530 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:39.530 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.530 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:39.530 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:39.530 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:39.530 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:39.530 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:39.530 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.102 nvme0n1 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
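
At this point the trace has advanced to the next DH group; host/auth.sh@101-103 mark the loop heads. Reconstructed from those markers, the sha384 leg of the test is a small matrix walk. A sketch under the assumption that dhgroups and keys are the arrays the traced loops iterate (only ffdhe4096, ffdhe6144 and ffdhe8192 appear in this part of the log):

  for dhgroup in "${dhgroups[@]}"; do      # host/auth.sh@101
      for keyid in "${!keys[@]}"; do       # host/auth.sh@102, keyids 0..4
          nvmet_auth_set_key sha384 "$dhgroup" "$keyid"     # program the target side
          connect_authenticate sha384 "$dhgroup" "$keyid"   # attach, verify, detach
      done
  done
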
DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: ]] 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:40.102 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.103 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:40.103 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:40.103 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.103 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:40.103 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.103 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:40.103 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:40.103 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:40.103 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.103 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.103 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:40.103 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.103 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:40.103 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:40.103 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:40.103 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:40.103 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:40.103 09:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.674 nvme0n1 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.674 09:50:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: ]] 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:40.674 09:50:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:40.674 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.935 nvme0n1 00:29:40.935 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:40.935 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.935 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.935 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:40.935 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.195 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.195 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.195 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
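
The get_main_ns_ip expansion traced at nvmf/common.sh@767-781 repeats before every attach. Reconstructed from the xtrace (the real helper may differ in detail), it resolves which environment variable holds the address for the active transport and prints that variable's value, which then becomes the -a argument:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1                    # "tcp" in this run
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1                             # indirect expansion
      echo "${!ip}"                                           # 10.0.0.1 in this run
  }
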
key=DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: ]] 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:41.196 09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.196 
09:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.457 nvme0n1 00:29:41.457 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.457 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.457 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.457 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.457 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.457 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.719 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.980 nvme0n1 00:29:41.980 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.980 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.980 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.980 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.980 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.980 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.980 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.980 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.980 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.980 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.980 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.980 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:41.980 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.241 09:50:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: ]] 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
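
All secrets in this log use the NVMe DH-HMAC-CHAP interchange format DHHC-1:<hh>:<base64>:, where (stated here as background from the NVMe authentication spec's secret representation, not taken from this log) hh identifies the transformation hash, 00 meaning the secret is used as-is and 01/02/03 meaning SHA-256/384/512, and the base64 payload carries the secret followed by a 4-byte CRC-32. That matches the payload sizes above, e.g. the keyid-2 host secret used throughout this section:

  # 48 base64 chars decode to 36 bytes: a 32-byte secret plus the 4-byte CRC-32
  key='DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW:'
  b64=${key#DHHC-1:*:}; b64=${b64%:}
  echo -n "$b64" | base64 -d | wc -c    # prints 36
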
nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:42.241 09:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.812 nvme0n1 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: ]] 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:42.812 09:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.385 nvme0n1 00:29:43.385 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:43.385 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.385 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.385 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:43.385 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.385 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: ]] 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.646 
09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:43.646 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.218 nvme0n1 00:29:44.218 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:44.218 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.218 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.218 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:44.218 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.218 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:44.218 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.218 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.218 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:44.218 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.218 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:44.218 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.218 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:44.218 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
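
Two recurring oddities in this trace are worth decoding. The pattern-escaped \n\v\m\e\0 is not a corrupt string: under set -x, bash re-quotes the right-hand side of a [[ == ]] comparison by backslash-escaping each character, so the controller-name assertion renders that way. And the [[ 0 == 0 ]] entries at common/autotest_common.sh@592 appear to compare the saved exit status of the preceding rpc_cmd against 0 while tracing is suspended (the xtrace_disable / set +x pairs around each RPC). A minimal reproduction of the quoting:

  set -x
  name=nvme0
  [[ $name == "nvme0" ]]    # traced as: [[ nvme0 == \n\v\m\e\0 ]]
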
DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: ]] 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:44.219 09:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.167 nvme0n1 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.167 09:50:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.167 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:45.168 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.168 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:45.168 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:45.168 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:45.168 09:50:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.168 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.168 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:45.168 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.168 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:45.168 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:45.168 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:45.168 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:45.168 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:45.168 09:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.739 nvme0n1 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: ]] 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:45.739 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:46.000 nvme0n1 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: ]] 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.000 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.261 nvme0n1 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:46.261 
09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:46.261 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: ]] 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.262 nvme0n1 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.262 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: ]] 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.523 
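Annotation: every secret echoed in this log uses the DHHC-1 interchange format from NVMe DH-HMAC-CHAP: DHHC-1:<hh>:<base64>:, where <hh> names the transformation hash (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32. The key lengths in this run bear that out: the :00: and :01: keys decode to 32+4 bytes, the :02: keys to 48+4, and the :03: keys to 64+4. A coreutils-only sanity check on one of the keys above:

    key='DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW:'
    b64=${key#DHHC-1:??:}    # strip the prefix and the hash-id field
    b64=${b64%:}             # strip the trailing colon
    echo -n "$b64" | base64 -d | wc -c    # 36 = 32-byte secret + 4-byte CRC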
09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.523 09:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.523 nvme0n1 00:29:46.523 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.523 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.523 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.523 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.523 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.523 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.523 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.523 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.523 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.523 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:46.784 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.784 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.784 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:46.784 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.784 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:46.784 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:46.784 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:46.784 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:46.784 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:46.784 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:46.784 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:46.784 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:46.784 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:46.784 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.785 nvme0n1 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: ]] 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.785 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.046 nvme0n1 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:47.046 
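Annotation: connect_authenticate (host/auth.sh@55-65) is the initiator half, driven entirely over SPDK JSON-RPC: pin the host to the one digest and DH group under test, attach with the matching keys, verify the controller actually appeared, then detach. rpc_cmd is autotest's wrapper around scripts/rpc.py, so the sha512/ffdhe3072/keyid 0 pass traced above is equivalent to:

    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # must print nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0

key0/ckey0 are names of keyring entries registered earlier in the run, outside this excerpt. The recurring [[ nvme0 == \n\v\m\e\0 ]] lines are just xtrace's rendering of a quoted right-hand side: the check at @64 is a literal string comparison, the backslashes showing that the pattern cannot glob.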
09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: ]] 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:47.046 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:47.306 09:50:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.306 nvme0n1 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:47.306 09:50:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: ]] 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:47.306 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.568 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:47.568 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:47.568 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:47.568 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:47.568 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:47.568 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.568 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.568 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:47.568 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.568 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:47.568 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:47.568 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:47.568 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:47.568 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:47.568 09:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.568 nvme0n1 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: ]] 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:47.568 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:47.568 09:50:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.832 nvme0n1 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:47.832 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:47.832 
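Annotation: the get_main_ns_ip block repeated before every attach (nvmf/common.sh@767-781) resolves the connect address without hard-coding the transport: an associative array maps each transport to the name of an environment variable, and bash indirect expansion then fetches that variable's value, 10.0.0.1 throughout this run. Condensed from the trace, with the error paths simplified:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP    # @770
            [tcp]=NVMF_INITIATOR_IP        # @771
        )
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}    # here: NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1             # ${!ip} is indirect expansion
        echo "${!ip}"                           # @781: 10.0.0.1
    }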
09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
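On the host side each iteration is the same two RPCs: bdev_nvme_set_options pins the negotiable digest and DH group, then bdev_nvme_attach_controller connects with the named DH-HMAC-CHAP keys. A sketch of that pair driven through SPDK's scripts/rpc.py rather than the test's rpc_cmd wrapper; the keyring_file_add_key registration and the /tmp key path are assumptions about how the names key4/ckey4 were created earlier in the run, outside this excerpt:

  # Hypothetical key registration (this log only ever shows the key names):
  scripts/rpc.py keyring_file_add_key key4 /tmp/key4.dhchap
  # Pin the host to one digest and one DH group for this iteration:
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # Connect with DH-HMAC-CHAP; --dhchap-ctrlr-key is omitted for unidirectional auth:
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4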
00:29:48.092 nvme0n1 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.092 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.093 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.093 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.093 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.093 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: ]] 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:48.353 09:50:47 
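The host/auth.sh@101 and @102 markers above show the outer loop advancing from ffdhe3072 to ffdhe4096 while the key-slot loop restarts at 0. Reconstructed from those trace markers, the driver is a nested iteration roughly like the sketch below; only sha512 and four DH groups appear in this stretch of the log, so the array contents (and any additional digests or groups such as ffdhe2048 in the real script) are assumptions:

  # Inferred driver: every DH group is exercised against every key slot.
  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # as observed in this excerpt
  for dhgroup in "${dhgroups[@]}"; do                  # host/auth.sh@101
      for keyid in "${!keys[@]}"; do                   # host/auth.sh@102, slots 0..4
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"     # @103: target side
          connect_authenticate sha512 "$dhgroup" "$keyid"   # @104: host side
      done
  done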
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.353 09:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.615 nvme0n1 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:48.615 09:50:48 
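Every successful attach is verified and torn down the same way: bdev_nvme_get_controllers is filtered through jq for the controller names, the result is compared against nvme0 (the \n\v\m\e\0 right-hand side in the trace is just each character backslash-escaped so [[ ]] matches literally), and bdev_nvme_detach_controller then removes the controller. Condensed, under the same rpc.py assumption as above:

  # Verify that exactly the expected controller came up, then detach it.
  ctrlr=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $ctrlr == "nvme0" ]]   # quoted right-hand side forces a literal match
  scripts/rpc.py bdev_nvme_detach_controller nvme0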
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: ]] 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.615 09:50:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.615 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.881 nvme0n1 00:29:48.881 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.881 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.881 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:48.881 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.881 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.881 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.881 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.881 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.881 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.881 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.881 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.881 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:48.881 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:48.881 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:48.881 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: ]] 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
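The nvmf/common.sh@767-781 block that repeats before every attach is get_main_ns_ip picking the address to dial: an associative array maps the transport to the name of an environment variable, and the value is then read back through indirect expansion. A reconstruction from the trace (the failure paths are assumptions; this log only shows the success path where NVMF_INITIATOR_IP=10.0.0.1):

  # Reconstructed: resolve the test IP for the active transport.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # common.sh@770
      ip_candidates["tcp"]=NVMF_INITIATOR_IP       # common.sh@771
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
      [[ -z ${!ip} ]] && return 1            # indirect expansion: var must be set
      echo "${!ip}"                          # here: 10.0.0.1
  }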
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.882 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.145 nvme0n1 00:29:49.145 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:49.145 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.145 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:49.145 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:49.145 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.145 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:49.145 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.145 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:49.145 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:49.145 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.145 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:49.145 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:49.145 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: ]] 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:49.146 09:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.406 nvme0n1 00:29:49.406 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:49.406 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.406 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:49.406 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:49.406 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:49.667 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:49.668 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.668 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.668 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:49.668 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.668 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:49.668 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:49.668 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:49.668 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:49.668 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:49.668 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.929 nvme0n1 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
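Every secret in this log uses the DH-HMAC-CHAP representation DHHC-1:t:base64:, where t names the secret-transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32, so a 02 secret decodes to 48 + 4 = 52 bytes. That layout comes from the NVMe-oF DH-HMAC-CHAP secret format as used by nvme-cli's gen-dhchap-key, not from this log, so treat it as background; it can be sanity-checked against the keyid-3 secret that appears above:

  # Worked check: a DHHC-1:02: secret should carry 48 secret bytes + 4 CRC bytes.
  key='DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==:'
  b64=${key#DHHC-1:02:}   # strip the prefix ...
  b64=${b64%:}            # ... and the trailing colon
  echo -n "$b64" | base64 -d | wc -c   # prints 52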
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: ]] 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:49.929 09:50:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:49.929 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.930 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.930 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:49.930 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.930 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:49.930 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:49.930 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:49.930 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:49.930 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:49.930 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.501 nvme0n1 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: ]] 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:50.501 09:50:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:50.501 09:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.763 nvme0n1 00:29:50.763 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:50.763 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:50.763 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:50.763 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:50.763 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.763 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: ]] 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:51.024 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.285 nvme0n1 00:29:51.285 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:51.285 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.285 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:51.285 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:51.285 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.285 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:51.285 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.285 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:51.285 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:51.285 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: ]] 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:51.547 09:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.808 nvme0n1 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:51.808 09:50:51 
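keyid 4 is the unidirectional case: its ckey is empty, so the [[ -z '' ]] test above skips the controller-key write on the target, and the host's ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at host/auth.sh@58 collapses to an empty array, which is why --dhchap-ctrlr-key disappears from the keyid-4 attach lines. A minimal standalone illustration of that ${var:+word} idiom (the ckeys contents here are placeholders):

  # ${var:+word} expands to word only when var is set and non-empty, so the
  # array stays empty for key slot 4 and the attach gets no controller key.
  ckeys=([3]="DHHC-1:00:placeholder:" [4]="")
  for keyid in 3 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo rpc_cmd bdev_nvme_attach_controller --dhchap-key "key${keyid}" "${ckey[@]}"
  done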
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.808 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:52.068 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:52.068 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:52.068 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:52.068 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:52.068 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.068 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.069 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:52.069 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.069 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:52.069 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:52.069 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:52.069 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:52.069 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:52.069 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.330 nvme0n1 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmVkN2YwNTViYWQ1ODM5YjJjNjFlZmQxNjBkMmU3MmOrXJKX: 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: ]] 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTVmNmNhNTI3Yzc4ODY4NTljZDlhOTAwYmRmYTY0ZTRiNTk5ZGJiMGRlMmI3YzA4NzQ0MTA0OTlhYzNhYWY3M5KpTDQ=: 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
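The host/auth.sh@101-104 markers above show the nesting that drives this matrix: an outer loop over DH groups and an inner loop over key indices, each iteration keying the kernel target and then authenticating against it. A sketch of that shape; the arguments to the two helpers are inferred from the traced calls, not read from the script itself:

for dhgroup in "${dhgroups[@]}"; do            # host/auth.sh@101
    for keyid in "${!keys[@]}"; do             # host/auth.sh@102
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # host/auth.sh@103
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # host/auth.sh@104
    done
done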
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:52.330 09:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.272 nvme0n1 00:29:53.272 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:53.272 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.272 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:53.272 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:53.272 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.272 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:53.272 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.272 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.272 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:53.272 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.272 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:53.272 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:53.272 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:53.272 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.272 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:53.272 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:53.272 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:53.272 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: ]] 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:53.273 09:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.843 nvme0n1 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:53.843 09:50:53 
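Every successful iteration above follows the same connect_authenticate shape: configure the initiator's allowed digest and DH group, attach with the numbered key, confirm the controller materialized, then detach. A condensed sketch of that cycle as traced (rpc_cmd is the suite's wrapper around scripts/rpc.py):

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}"
    # the [[ nvme0 == \n\v\m\e\0 ]] check above, written plainly:
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}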
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: ]] 00:29:53.843 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:53.844 09:50:53 
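The DHHC-1:<nn>: prefix on each secret above encodes the HMAC the key was generated for (00 unqualified, 01 SHA-256, 02 SHA-384, 03 SHA-512), which is why the keyid 0-4 secrets differ in prefix and length. Secrets of this shape can be produced with nvme-cli; the exact flags below are an assumption about its gen-dhchap-key command, not something taken from this log:

nvme gen-dhchap-key --hmac=3 --key-length=64   # emits a DHHC-1:03:... secret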
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:53.844 09:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.416 nvme0n1 00:29:54.416 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:54.416 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.416 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.416 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:54.416 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyNjA5NTI1MGIwOTZkYzI2MmI3N2MwMTRlNzQ5NzdjZWVkMjNhODgzZDdmNDJlmSd03Q==: 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: ]] 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI2MTA1ZTg2YTUyNDVjNTg5NzBlNmZhMmI3MWYxMGVMWs7r: 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:54.678 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:54.678 
09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.249 nvme0n1 00:29:55.249 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:55.249 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.249 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:55.249 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:55.249 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.249 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:55.249 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.249 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUyZmY3ZTE3YTBiNzNiNTljOWQ4ZjUxYTIzOTZiZTg2YmE3ZGE5YmI3NDcwMGQ4ZjYwZmU4MWZkOTA5NzdhZMD+IZE=: 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
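The ckey array assignment traced at host/auth.sh@58 is what makes bidirectional authentication optional: when ckeys[keyid] is empty (keyid 4 above), the array expands to nothing and the attach is unidirectional. In isolation:

ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # empty ckey => empty array
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"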
common/autotest_common.sh@564 -- # xtrace_disable 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:55.250 09:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.194 nvme0n1 00:29:56.194 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:56.194 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.194 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:56.194 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:56.194 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.194 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:56.194 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.194 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.194 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:56.194 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.194 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:56.194 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:56.194 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:56.194 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:56.194 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:56.194 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:29:56.194 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: ]] 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # local es=0 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@645 -- # type -t rpc_cmd
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@656 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:56.195 request:
00:29:56.195 {
00:29:56.195 "name": "nvme0",
00:29:56.195 "trtype": "tcp",
00:29:56.195 "traddr": "10.0.0.1",
00:29:56.195 "adrfam": "ipv4",
00:29:56.195 "trsvcid": "4420",
00:29:56.195 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:29:56.195 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:29:56.195 "prchk_reftag": false,
00:29:56.195 "prchk_guard": false,
00:29:56.195 "hdgst": false,
00:29:56.195 "ddgst": false,
00:29:56.195 "allow_unrecognized_csi": false,
00:29:56.195 "method": "bdev_nvme_attach_controller",
00:29:56.195 "req_id": 1
00:29:56.195 }
00:29:56.195 Got JSON-RPC error response
00:29:56.195 response:
00:29:56.195 {
00:29:56.195 "code": -5,
00:29:56.195 "message": "Input/output error"
00:29:56.195 }
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]]
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@656 -- # es=1
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@664 -- # (( es > 128 ))
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # [[ -n '' ]]
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@680 -- # (( !es == 0 ))
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
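host/auth.sh@112 wraps the attach above in NOT because the target now requires authentication and the host offers no key, so the -5 Input/output error is the expected outcome. A minimal sketch, assuming the helper simply inverts the wrapped command's exit status (the real helper's es bookkeeping is what the autotest_common.sh@653-680 lines trace):

NOT() {
    if "$@"; then
        return 1    # the command unexpectedly succeeded
    fi
    return 0        # failure was expected
}
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0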
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # local es=0
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@641 -- # local arg=rpc_cmd
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@645 -- # type -t rpc_cmd
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@656 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:56.195 request:
00:29:56.195 {
00:29:56.195 "name": "nvme0",
00:29:56.195 "trtype": "tcp",
00:29:56.195 "traddr": "10.0.0.1",
00:29:56.195 "adrfam": "ipv4",
00:29:56.195 "trsvcid": "4420",
00:29:56.195 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:29:56.195 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:29:56.195 "prchk_reftag": false,
00:29:56.195 "prchk_guard": false,
00:29:56.195 "hdgst": false,
00:29:56.195 "ddgst": false,
00:29:56.195 "dhchap_key": "key2",
00:29:56.195 "allow_unrecognized_csi": false,
00:29:56.195 "method": "bdev_nvme_attach_controller",
00:29:56.195 "req_id": 1
00:29:56.195 }
00:29:56.195 Got JSON-RPC error response
00:29:56.195 response:
00:29:56.195 {
00:29:56.195 "code": -5,
00:29:56.195 "message": "Input/output error"
00:29:56.195 }
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]]
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@656 -- # es=1
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@664 -- # (( es > 128 ))
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # [[ -n '' ]]
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@680 -- # (( !es == 0 ))
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
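The second negative case differs only in offering key2 while the target was last keyed with key index 1, so authentication again fails with -5. After each expected failure, the host/auth.sh@114 and @120 checks confirm the failed attach left no controller behind:

(( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))   # no stale nvme0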
00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # local es=0 00:29:56.195 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:56.196 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:29:56.196 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:29:56.196 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:29:56.196 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:29:56.196 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@656 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:56.196 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:56.196 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.457 request: 00:29:56.457 { 00:29:56.457 "name": "nvme0", 00:29:56.457 "trtype": "tcp", 00:29:56.457 "traddr": "10.0.0.1", 00:29:56.457 "adrfam": "ipv4", 00:29:56.457 "trsvcid": "4420", 00:29:56.457 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:56.457 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:56.457 "prchk_reftag": false, 00:29:56.457 "prchk_guard": false, 00:29:56.457 "hdgst": false, 00:29:56.457 "ddgst": false, 00:29:56.457 "dhchap_key": "key1", 00:29:56.457 "dhchap_ctrlr_key": "ckey2", 00:29:56.457 "allow_unrecognized_csi": false, 00:29:56.457 "method": "bdev_nvme_attach_controller", 00:29:56.457 "req_id": 1 00:29:56.457 } 00:29:56.457 Got JSON-RPC error response 00:29:56.457 response: 00:29:56.457 { 00:29:56.457 "code": -5, 00:29:56.457 "message": "Input/output 
error" 00:29:56.457 } 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@656 -- # es=1 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:56.457 09:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.457 nvme0n1 00:29:56.457 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:56.457 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:56.457 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:56.457 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:56.457 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:56.457 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:56.457 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:56.457 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:56.457 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:56.457 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:56.457 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:56.457 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: ]] 00:29:56.457 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:56.457 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:56.457 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:56.457 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.718 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:56.718 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.718 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:29:56.718 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # local es=0 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@656 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.719 request: 00:29:56.719 { 00:29:56.719 "name": "nvme0", 00:29:56.719 "dhchap_key": "key1", 00:29:56.719 "dhchap_ctrlr_key": "ckey2", 00:29:56.719 "method": "bdev_nvme_set_keys", 00:29:56.719 "req_id": 1 00:29:56.719 } 00:29:56.719 Got JSON-RPC error response 00:29:56.719 response: 00:29:56.719 { 00:29:56.719 "code": -13, 00:29:56.719 "message": "Permission denied" 00:29:56.719 } 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@656 -- # es=1 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@680 -- # (( 
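A mismatched rotation (key1 with ckey2 at host/auth.sh@136) is rejected synchronously with -13 Permission denied rather than tearing down the session. The loop traced at host/auth.sh@137-138 then polls, one second at a time, until the controller count drops to zero; the short reconnect timers set at attach time bound the wait. Equivalent to:

while (( $(rpc_cmd bdev_nvme_get_controllers | jq length) != 0 )); do
    sleep 1s
done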
!es == 0 )) 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:56.719 09:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:57.660 09:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.660 09:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:57.660 09:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:57.660 09:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.921 09:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:57.921 09:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:57.921 09:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1ZWM0M2M1MTg2YTE0ZWVjY2Q1ZDg5NWIwZTIyZDE2ZjRkZmVlYThmNzY3MTZhR0oopw==: 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: ]] 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NTM1YTZhMTcyZGRjYWNiYzEzM2E2ZDQzOGY2MzUxZmYwYjY1NDBkMGE1OTNjMTgxoSRwBw==: 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:58.863 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.124 nvme0n1 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1NjAyZGQyZWIyMmUyMDhlNzU1ZjNmNWQ2OTVmYWMNKbNW: 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: ]] 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmM3MWM0ZGIyYTFiNzBiNDM1MzIxNDNjNzM5MjViNTnaws9R: 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@653 -- # local es=0 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@656 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:59.124 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.124 request: 00:29:59.124 { 00:29:59.124 "name": "nvme0", 00:29:59.124 "dhchap_key": "key2", 00:29:59.124 "dhchap_ctrlr_key": "ckey1", 00:29:59.124 "method": "bdev_nvme_set_keys", 00:29:59.125 "req_id": 1 00:29:59.125 } 00:29:59.125 Got JSON-RPC error response 00:29:59.125 response: 00:29:59.125 { 00:29:59.125 "code": -13, 00:29:59.125 "message": "Permission denied" 00:29:59.125 } 00:29:59.125 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:29:59.125 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@656 -- # es=1 00:29:59.125 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:29:59.125 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:29:59.125 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:29:59.125 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:59.125 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:59.125 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:59.125 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.125 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:59.125 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:29:59.125 09:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:30:00.071 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:00.071 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:00.071 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:00.071 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.071 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:00.332 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:30:00.332 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:30:00.332 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:30:00.332 09:50:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:30:00.332 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:00.332 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:30:00.332 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:00.332 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:30:00.332 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:00.332 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:00.332 rmmod nvme_tcp 00:30:00.332 rmmod nvme_fabrics 00:30:00.332 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:00.332 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:30:00.332 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:30:00.332 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 3516129 ']' 00:30:00.332 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 3516129 00:30:00.332 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' -z 3516129 ']' 00:30:00.332 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # kill -0 3516129 00:30:00.332 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # uname 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3516129 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3516129' 00:30:00.333 killing process with pid 3516129 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # kill 3516129 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@977 -- # wait 3516129 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:30:00.333 09:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.883 09:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:02.883 09:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:02.883 09:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:02.883 09:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:30:02.883 09:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:30:02.883 09:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:30:02.883 09:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:02.883 09:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:02.883 09:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:02.883 09:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:02.883 09:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:30:02.883 09:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:30:02.883 09:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:06.189 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:06.189 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:06.189 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:06.189 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:06.189 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:06.189 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:06.189 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:06.189 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:06.189 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:06.189 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:06.450 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:06.450 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:06.450 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:06.450 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:06.450 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:06.450 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:06.450 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:30:06.711 09:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ds8 /tmp/spdk.key-null.VJP /tmp/spdk.key-sha256.xqT /tmp/spdk.key-sha384.e3F /tmp/spdk.key-sha512.Of3 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:30:06.711 09:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:10.927 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:10.927 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:10.927 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:30:10.927 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:10.927 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:10.927 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:10.927 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:10.927 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:10.927 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:10.927 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:10.927 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:10.927 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:10.927 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:10.927 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:10.927 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:10.927 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:10.927 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:10.927 00:30:10.927 real 1m1.610s 00:30:10.927 user 0m55.123s 00:30:10.927 sys 0m16.576s 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # xtrace_disable 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.927 ************************************ 00:30:10.927 END TEST nvmf_auth_host 00:30:10.927 ************************************ 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1110 -- # xtrace_disable 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.927 ************************************ 00:30:10.927 START TEST nvmf_digest 00:30:10.927 ************************************ 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:10.927 * Looking for test storage... 
00:30:10.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1626 -- # lcov --version 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:10.927 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:30:10.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.928 --rc genhtml_branch_coverage=1 00:30:10.928 --rc genhtml_function_coverage=1 00:30:10.928 --rc genhtml_legend=1 00:30:10.928 --rc geninfo_all_blocks=1 00:30:10.928 --rc geninfo_unexecuted_blocks=1 00:30:10.928 00:30:10.928 ' 00:30:10.928 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:30:10.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.928 --rc genhtml_branch_coverage=1 00:30:10.928 --rc genhtml_function_coverage=1 00:30:10.928 --rc genhtml_legend=1 00:30:10.928 --rc geninfo_all_blocks=1 00:30:10.928 --rc geninfo_unexecuted_blocks=1 00:30:10.928 00:30:10.928 ' 00:30:10.928 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:30:10.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.928 --rc genhtml_branch_coverage=1 00:30:10.928 --rc genhtml_function_coverage=1 00:30:10.928 --rc genhtml_legend=1 00:30:10.928 --rc geninfo_all_blocks=1 00:30:10.928 --rc geninfo_unexecuted_blocks=1 00:30:10.928 00:30:10.928 ' 00:30:10.928 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:30:10.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.928 --rc genhtml_branch_coverage=1 00:30:10.928 --rc genhtml_function_coverage=1 00:30:10.928 --rc genhtml_legend=1 00:30:10.928 --rc geninfo_all_blocks=1 00:30:10.928 --rc geninfo_unexecuted_blocks=1 00:30:10.928 00:30:10.928 ' 00:30:10.928 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.190 
09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.190 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:11.190 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:30:11.191 09:51:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:30:19.343 09:51:18 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:19.343 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:19.343 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
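The device matching above keys off PCI vendor and device IDs: 0x8086 with 0x1592/0x159b selects Intel E810 ports, 0x37d2 selects X722, and the 0x15b3 entries cover the Mellanox ConnectX parts. A minimal sketch of the same match done directly against sysfs (an assumption about mechanism, for illustration only; the harness actually walks a prebuilt pci_bus_cache map):

for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor")    # e.g. 0x8086 for Intel
    device=$(<"$dev/device")    # e.g. 0x159b for an E810 port
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b)
            echo "Found ${dev##*/} ($vendor - $device)" ;;   # matches the Found lines above
    esac
done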
00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:19.343 Found net devices under 0000:31:00.0: cvl_0_0 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:19.343 Found net devices under 0000:31:00.1: cvl_0_1 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:19.343 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 
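With both E810 ports identified, each PCI address is resolved to its kernel net device through sysfs, which is all the pci_net_devs glob above does:

pci=0000:31:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)        # -> .../net/cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"

Because two usable devices were found, cvl_0_0 is taken as the target interface (moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 just below) while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; the two pings that follow verify both directions.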
00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:19.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:19.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:30:19.344 00:30:19.344 --- 10.0.0.2 ping statistics --- 00:30:19.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:19.344 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:19.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:19.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:30:19.344 00:30:19.344 --- 10.0.0.1 ping statistics --- 00:30:19.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:19.344 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1110 -- # xtrace_disable 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:19.344 ************************************ 00:30:19.344 START TEST nvmf_digest_clean 00:30:19.344 ************************************ 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # run_digest 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=3533986 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 3533986 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
--wait-for-rpc 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # '[' -z 3533986 ']' 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local max_retries=100 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@843 -- # xtrace_disable 00:30:19.344 09:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:19.344 [2024-10-07 09:51:18.476001] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:30:19.344 [2024-10-07 09:51:18.476063] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:19.344 [2024-10-07 09:51:18.566838] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.344 [2024-10-07 09:51:18.660048] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:19.344 [2024-10-07 09:51:18.660110] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:19.344 [2024-10-07 09:51:18.660119] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:19.344 [2024-10-07 09:51:18.660126] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:19.344 [2024-10-07 09:51:18.660132] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
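nvmfappstart has launched nvmf_tgt inside the target namespace with --wait-for-rpc, so the harness now blocks in waitforlisten until the app answers on its RPC socket. A hedged reduction of that wait loop (the real helper in autotest_common.sh differs in retry accounting and error reporting):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
        # rpc_get_methods is the cheapest probe: any JSON reply means the socket
        # is up, even while the app is still waiting for framework_start_init
        scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}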
00:30:19.344 [2024-10-07 09:51:18.661001] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@867 -- # return 0 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@733 -- # xtrace_disable 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:19.917 null0 00:30:19.917 [2024-10-07 09:51:19.428786] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.917 [2024-10-07 09:51:19.453106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3534260 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3534260 /var/tmp/bperf.sock 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # '[' -z 3534260 ']' 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@839 -- # local max_retries=100 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:19.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@843 -- # xtrace_disable 00:30:19.917 09:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:19.917 [2024-10-07 09:51:19.512008] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:30:19.917 [2024-10-07 09:51:19.512075] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3534260 ] 00:30:20.179 [2024-10-07 09:51:19.593539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.179 [2024-10-07 09:51:19.688115] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.752 09:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:30:20.752 09:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@867 -- # return 0 00:30:20.752 09:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:20.752 09:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:20.752 09:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:21.013 09:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:21.013 09:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:21.585 nvme0n1 00:30:21.585 09:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:21.585 09:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:21.585 Running I/O for 2 seconds... 
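The bperf sequence above is the standard remote-controlled bdevperf flow: start the app suspended on its own RPC socket, finish framework init (leaving crc32c on the software accel module, since scan_dsa=false), attach an NVMe-oF controller with data digest enabled, then drive the workload over the same socket. Condensed, with the exact arguments from this run:

BPERF=/var/tmp/bperf.sock
build/examples/bdevperf -m 2 -r $BPERF -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
scripts/rpc.py -s $BPERF framework_start_init
scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests    # starts the 2-second job reported below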
00:30:23.473 18655.00 IOPS, 72.87 MiB/s 20849.00 IOPS, 81.44 MiB/s 00:30:23.473 Latency(us) 00:30:23.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.473 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:23.473 nvme0n1 : 2.00 20862.70 81.49 0.00 0.00 6128.79 2703.36 18677.76 00:30:23.473 =================================================================================================================== 00:30:23.473 Total : 20862.70 81.49 0.00 0.00 6128.79 2703.36 18677.76 00:30:23.473 { 00:30:23.473 "results": [ 00:30:23.473 { 00:30:23.473 "job": "nvme0n1", 00:30:23.473 "core_mask": "0x2", 00:30:23.473 "workload": "randread", 00:30:23.473 "status": "finished", 00:30:23.473 "queue_depth": 128, 00:30:23.473 "io_size": 4096, 00:30:23.473 "runtime": 2.003911, 00:30:23.473 "iops": 20862.702984314175, 00:30:23.473 "mibps": 81.49493353247725, 00:30:23.473 "io_failed": 0, 00:30:23.473 "io_timeout": 0, 00:30:23.473 "avg_latency_us": 6128.7878999529585, 00:30:23.473 "min_latency_us": 2703.36, 00:30:23.473 "max_latency_us": 18677.76 00:30:23.473 } 00:30:23.473 ], 00:30:23.473 "core_count": 1 00:30:23.473 } 00:30:23.473 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:23.473 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:23.473 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:23.473 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:23.473 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:23.473 | select(.opcode=="crc32c") 00:30:23.473 | "\(.module_name) \(.executed)"' 00:30:23.734 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:23.734 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:23.734 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:23.734 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:23.734 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3534260 00:30:23.734 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' -z 3534260 ']' 00:30:23.734 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # kill -0 3534260 00:30:23.734 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # uname 00:30:23.734 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:30:23.734 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3534260 00:30:23.734 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:30:23.734 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:30:23.734 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- 
# echo 'killing process with pid 3534260' 00:30:23.734 killing process with pid 3534260 00:30:23.734 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # kill 3534260 00:30:23.734 Received shutdown signal, test time was about 2.000000 seconds 00:30:23.734 00:30:23.734 Latency(us) 00:30:23.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.734 =================================================================================================================== 00:30:23.734 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:23.734 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@977 -- # wait 3534260 00:30:23.995 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:23.995 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:23.995 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:23.995 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:23.995 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:23.995 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:23.995 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:23.995 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3535016 00:30:23.995 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3535016 /var/tmp/bperf.sock 00:30:23.995 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # '[' -z 3535016 ']' 00:30:23.995 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:23.995 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:23.995 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local max_retries=100 00:30:23.995 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:23.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:23.995 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@843 -- # xtrace_disable 00:30:23.995 09:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:23.995 [2024-10-07 09:51:23.543283] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:30:23.995 [2024-10-07 09:51:23.543339] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3535016 ] 00:30:23.995 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:23.995 Zero copy mechanism will not be used. 
00:30:23.995 [2024-10-07 09:51:23.619952] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.257 [2024-10-07 09:51:23.672160] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.830 09:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:30:24.830 09:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@867 -- # return 0 00:30:24.830 09:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:24.830 09:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:24.830 09:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:25.091 09:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:25.091 09:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:25.352 nvme0n1 00:30:25.352 09:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:25.352 09:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:25.612 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:25.612 Zero copy mechanism will not be used. 00:30:25.612 Running I/O for 2 seconds... 
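After each of these 2-second runs the test proves the digests were really computed where expected: it pulls accel statistics over the bperf socket and keeps only the crc32c opcode counters, exactly as the get_accel_stats helper above does. Condensed:

scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# yields e.g. "software 20862"; the run passes only if executed > 0 and the
# module name matches the expected engine (software here, since DSA is off)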
00:30:27.588 4141.00 IOPS, 517.62 MiB/s 3606.50 IOPS, 450.81 MiB/s 00:30:27.588 Latency(us) 00:30:27.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.588 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:27.588 nvme0n1 : 2.00 3608.26 451.03 0.00 0.00 4431.79 785.07 13489.49 00:30:27.588 =================================================================================================================== 00:30:27.588 Total : 3608.26 451.03 0.00 0.00 4431.79 785.07 13489.49 00:30:27.588 { 00:30:27.588 "results": [ 00:30:27.588 { 00:30:27.588 "job": "nvme0n1", 00:30:27.588 "core_mask": "0x2", 00:30:27.588 "workload": "randread", 00:30:27.588 "status": "finished", 00:30:27.588 "queue_depth": 16, 00:30:27.588 "io_size": 131072, 00:30:27.588 "runtime": 2.003457, 00:30:27.588 "iops": 3608.2631172019164, 00:30:27.588 "mibps": 451.03288965023955, 00:30:27.588 "io_failed": 0, 00:30:27.588 "io_timeout": 0, 00:30:27.588 "avg_latency_us": 4431.788614377277, 00:30:27.588 "min_latency_us": 785.0666666666667, 00:30:27.588 "max_latency_us": 13489.493333333334 00:30:27.588 } 00:30:27.588 ], 00:30:27.588 "core_count": 1 00:30:27.588 } 00:30:27.588 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:27.588 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:27.588 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:27.588 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:27.588 | select(.opcode=="crc32c") 00:30:27.588 | "\(.module_name) \(.executed)"' 00:30:27.588 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:27.869 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:27.869 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:27.869 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:27.869 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:27.869 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3535016 00:30:27.869 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' -z 3535016 ']' 00:30:27.869 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # kill -0 3535016 00:30:27.869 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # uname 00:30:27.869 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:30:27.869 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3535016 00:30:27.869 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:30:27.869 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:30:27.869 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@971 -- # echo 'killing process with pid 3535016' 00:30:27.869 killing process with pid 3535016 00:30:27.869 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # kill 3535016 00:30:27.869 Received shutdown signal, test time was about 2.000000 seconds 00:30:27.869 00:30:27.869 Latency(us) 00:30:27.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.870 =================================================================================================================== 00:30:27.870 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:27.870 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@977 -- # wait 3535016 00:30:27.870 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:27.870 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:27.870 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:27.870 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:27.870 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:27.870 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:27.870 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:27.870 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3535711 00:30:27.870 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3535711 /var/tmp/bperf.sock 00:30:27.870 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # '[' -z 3535711 ']' 00:30:27.870 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:27.870 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:27.870 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local max_retries=100 00:30:27.870 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:27.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:27.870 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@843 -- # xtrace_disable 00:30:27.870 09:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:27.870 [2024-10-07 09:51:27.522607] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:30:27.870 [2024-10-07 09:51:27.522665] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3535711 ] 00:30:28.131 [2024-10-07 09:51:27.599170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.131 [2024-10-07 09:51:27.651991] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.704 09:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:30:28.704 09:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@867 -- # return 0 00:30:28.704 09:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:28.704 09:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:28.704 09:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:28.964 09:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:28.964 09:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:29.224 nvme0n1 00:30:29.224 09:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:29.224 09:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:29.485 Running I/O for 2 seconds... 
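After each timed run the script checks that the data digests were actually computed, and by which accel module; the raw trace of that check appears at 09:51:27 above and repeats after this run. A condensed sketch of just the verification step, using the jq filter exactly as traced:

  # pull per-opcode accel statistics from the running bdevperf and keep the crc32c row
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
    | { read -r acc_module acc_executed
        # with scan_dsa=false the expected module is the software fallback,
        # and at least one digest must actually have been computed during the run
        (( acc_executed > 0 )) && [[ $acc_module == software ]]; }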
00:30:31.370 30520.00 IOPS, 119.22 MiB/s 30578.00 IOPS, 119.45 MiB/s 00:30:31.370 Latency(us) 00:30:31.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:31.370 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:31.370 nvme0n1 : 2.01 30582.55 119.46 0.00 0.00 4180.22 2116.27 8683.52 00:30:31.370 =================================================================================================================== 00:30:31.370 Total : 30582.55 119.46 0.00 0.00 4180.22 2116.27 8683.52 00:30:31.370 { 00:30:31.370 "results": [ 00:30:31.370 { 00:30:31.370 "job": "nvme0n1", 00:30:31.370 "core_mask": "0x2", 00:30:31.370 "workload": "randwrite", 00:30:31.370 "status": "finished", 00:30:31.370 "queue_depth": 128, 00:30:31.370 "io_size": 4096, 00:30:31.370 "runtime": 2.006046, 00:30:31.370 "iops": 30582.548954510516, 00:30:31.370 "mibps": 119.4630818535567, 00:30:31.370 "io_failed": 0, 00:30:31.370 "io_timeout": 0, 00:30:31.370 "avg_latency_us": 4180.22359663135, 00:30:31.370 "min_latency_us": 2116.266666666667, 00:30:31.370 "max_latency_us": 8683.52 00:30:31.370 } 00:30:31.370 ], 00:30:31.370 "core_count": 1 00:30:31.370 } 00:30:31.370 09:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:31.370 09:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:31.370 09:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:31.370 09:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:31.370 | select(.opcode=="crc32c") 00:30:31.370 | "\(.module_name) \(.executed)"' 00:30:31.370 09:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:31.632 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:31.632 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:31.632 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:31.632 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:31.632 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3535711 00:30:31.632 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' -z 3535711 ']' 00:30:31.632 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # kill -0 3535711 00:30:31.632 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # uname 00:30:31.632 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:30:31.632 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3535711 00:30:31.632 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:30:31.632 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:30:31.632 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@971 -- # echo 'killing process with pid 3535711' 00:30:31.632 killing process with pid 3535711 00:30:31.632 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # kill 3535711 00:30:31.632 Received shutdown signal, test time was about 2.000000 seconds 00:30:31.632 00:30:31.632 Latency(us) 00:30:31.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:31.632 =================================================================================================================== 00:30:31.632 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:31.632 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@977 -- # wait 3535711 00:30:31.894 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:31.894 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:31.894 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:31.894 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:31.894 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:31.894 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:31.894 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:31.894 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3536404 00:30:31.894 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3536404 /var/tmp/bperf.sock 00:30:31.894 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # '[' -z 3536404 ']' 00:30:31.894 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:31.894 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:31.894 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local max_retries=100 00:30:31.894 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:31.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:31.894 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@843 -- # xtrace_disable 00:30:31.894 09:51:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:31.894 [2024-10-07 09:51:31.387393] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:30:31.894 [2024-10-07 09:51:31.387448] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3536404 ] 00:30:31.894 I/O size of 131072 is greater than zero copy threshold (65536). 
00:30:31.894 Zero copy mechanism will not be used. 00:30:31.894 [2024-10-07 09:51:31.465926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.894 [2024-10-07 09:51:31.519029] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.840 09:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:30:32.840 09:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@867 -- # return 0 00:30:32.840 09:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:32.840 09:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:32.840 09:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:32.840 09:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:32.840 09:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:33.101 nvme0n1 00:30:33.101 09:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:33.101 09:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:33.101 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:33.101 Zero copy mechanism will not be used. 00:30:33.101 Running I/O for 2 seconds... 
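Each cycle ends by tearing down its bdevperf through the killprocess helper traced at 09:51:27 and 09:51:31 above. A condensed sketch of that helper, following the guards visible in the trace; the real autotest_common.sh version has a few more branches (for example escalating through sudo), so this is illustrative only:

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # the '[' -z ... ']' guard in the trace
    kill -0 "$pid" || return 1                 # probe that the pid still exists
    local name
    name=$(ps --no-headers -o comm= "$pid")    # reactor_1 for a bdevperf instance
    [ "$name" = sudo ] || kill "$pid"          # never signal a bare sudo wrapper directly
    wait "$pid"                                # reap it when bdevperf is a child of this shell
  }
  killprocess 3536404                          # bperfpid of the run above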
00:30:35.429 6492.00 IOPS, 811.50 MiB/s 6815.00 IOPS, 851.88 MiB/s 00:30:35.429 Latency(us) 00:30:35.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.429 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:35.429 nvme0n1 : 2.00 6816.41 852.05 0.00 0.00 2344.21 1003.52 8956.59 00:30:35.429 =================================================================================================================== 00:30:35.429 Total : 6816.41 852.05 0.00 0.00 2344.21 1003.52 8956.59 00:30:35.429 { 00:30:35.429 "results": [ 00:30:35.429 { 00:30:35.429 "job": "nvme0n1", 00:30:35.429 "core_mask": "0x2", 00:30:35.429 "workload": "randwrite", 00:30:35.429 "status": "finished", 00:30:35.429 "queue_depth": 16, 00:30:35.429 "io_size": 131072, 00:30:35.429 "runtime": 2.002519, 00:30:35.429 "iops": 6816.41472565304, 00:30:35.429 "mibps": 852.05184070663, 00:30:35.429 "io_failed": 0, 00:30:35.429 "io_timeout": 0, 00:30:35.429 "avg_latency_us": 2344.2123174603175, 00:30:35.429 "min_latency_us": 1003.52, 00:30:35.429 "max_latency_us": 8956.586666666666 00:30:35.429 } 00:30:35.429 ], 00:30:35.429 "core_count": 1 00:30:35.429 } 00:30:35.429 09:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:35.429 09:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:35.429 09:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:35.429 09:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:35.429 | select(.opcode=="crc32c") 00:30:35.429 | "\(.module_name) \(.executed)"' 00:30:35.430 09:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:35.430 09:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:35.430 09:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:35.430 09:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:35.430 09:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:35.430 09:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3536404 00:30:35.430 09:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' -z 3536404 ']' 00:30:35.430 09:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # kill -0 3536404 00:30:35.430 09:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # uname 00:30:35.430 09:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:30:35.430 09:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3536404 00:30:35.430 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:30:35.430 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:30:35.430 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@971 -- # echo 'killing process with pid 3536404' 00:30:35.430 killing process with pid 3536404 00:30:35.430 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # kill 3536404 00:30:35.430 Received shutdown signal, test time was about 2.000000 seconds 00:30:35.430 00:30:35.430 Latency(us) 00:30:35.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.430 =================================================================================================================== 00:30:35.430 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:35.430 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@977 -- # wait 3536404 00:30:35.691 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3533986 00:30:35.691 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' -z 3533986 ']' 00:30:35.691 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # kill -0 3533986 00:30:35.691 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # uname 00:30:35.691 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:30:35.691 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3533986 00:30:35.691 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:30:35.691 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:30:35.692 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3533986' 00:30:35.692 killing process with pid 3533986 00:30:35.692 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # kill 3533986 00:30:35.692 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@977 -- # wait 3533986 00:30:35.692 00:30:35.692 real 0m16.926s 00:30:35.692 user 0m33.507s 00:30:35.692 sys 0m3.758s 00:30:35.692 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # xtrace_disable 00:30:35.692 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:35.692 ************************************ 00:30:35.692 END TEST nvmf_digest_clean 00:30:35.692 ************************************ 00:30:35.954 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:35.954 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:30:35.954 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1110 -- # xtrace_disable 00:30:35.954 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:35.954 ************************************ 00:30:35.954 START TEST nvmf_digest_error 00:30:35.954 ************************************ 00:30:35.954 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # run_digest_error 00:30:35.954 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:35.954 09:51:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:35.954 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:35.954 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:35.954 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=3537376 00:30:35.954 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 3537376 00:30:35.954 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:35.954 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # '[' -z 3537376 ']' 00:30:35.954 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.954 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local max_retries=100 00:30:35.954 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.954 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@843 -- # xtrace_disable 00:30:35.954 09:51:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:35.954 [2024-10-07 09:51:35.476919] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:30:35.954 [2024-10-07 09:51:35.476980] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.954 [2024-10-07 09:51:35.563732] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.242 [2024-10-07 09:51:35.622430] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:36.242 [2024-10-07 09:51:35.622461] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:36.242 [2024-10-07 09:51:35.622467] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:36.242 [2024-10-07 09:51:35.622471] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:36.242 [2024-10-07 09:51:35.622476] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
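The NOTICE banner above documents how to inspect the 0xFFFF tracepoint mask this nvmf_tgt was started with; both recipes below are quoted from the banner itself (the spdk_trace binary path is an assumption based on the build layout seen elsewhere in this log):

  # live snapshot of nvmf trace events from app instance 0
  $SPDK/build/bin/spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0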
00:30:36.242 [2024-10-07 09:51:35.622926] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@867 -- # return 0 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@733 -- # xtrace_disable 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:36.812 [2024-10-07 09:51:36.312824] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:36.812 null0 00:30:36.812 [2024-10-07 09:51:36.390828] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:36.812 [2024-10-07 09:51:36.415020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3537450 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3537450 /var/tmp/bperf.sock 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # '[' -z 3537450 ']' 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
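This error variant rigs the target rather than the host: crc32c is routed to the injectable accel error module before the subsystem comes up, so the target can later emit bad data digests on demand. A sketch of that target-side setup, reconstructed from the rpc_cmd traces and NOTICEs above; the null0 bdev and TCP listener RPCs are hidden behind rpc_cmd in the trace, so only their NOTICE output is visible here:

  # nvmf_tgt inside the test netns, parked until configured (command as traced above)
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  # route every crc32c operation to the injectable 'error' module
  $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
  # remaining config (null0 bdev, TCP transport, listener on 10.0.0.2:4420) follows,
  # then bdevperf is launched with -w randread -o 4096 -q 128 as traced at host/digest.sh@57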
00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local max_retries=100 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:36.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@843 -- # xtrace_disable 00:30:36.812 09:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:36.812 [2024-10-07 09:51:36.471987] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:30:36.812 [2024-10-07 09:51:36.472038] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3537450 ] 00:30:37.071 [2024-10-07 09:51:36.548368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.071 [2024-10-07 09:51:36.602133] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.640 09:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:30:37.640 09:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@867 -- # return 0 00:30:37.640 09:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:37.640 09:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:37.919 09:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:37.919 09:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:37.919 09:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:37.919 09:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:37.919 09:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:37.919 09:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:38.179 nvme0n1 00:30:38.439 09:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:38.439 09:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:38.439 09:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
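The ordering traced above matters: injection is switched off while nvme0 attaches so the connect-time digests pass, and only armed afterwards with the -t corrupt -i 256 knobs exactly as traced. The four RPCs in sequence; rpc_cmd goes to the nvmf_tgt's default socket while bperf_rpc targets /var/tmp/bperf.sock:

  # host side: keep NVMe error statistics and retry forever, so digest errors are observable
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: make sure crc32c is healthy while the controller attaches
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: now corrupt crc32c results; the host logs the 'data digest error on tqpair'
  # lines seen below, completing reads as TRANSIENT TRANSPORT ERROR (00/22)
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256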
00:30:38.439 09:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:38.439 09:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:38.439 09:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:38.439 Running I/O for 2 seconds... 00:30:38.439 [2024-10-07 09:51:37.967267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.439 [2024-10-07 09:51:37.967300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.439 [2024-10-07 09:51:37.967313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.439 [2024-10-07 09:51:37.978460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.439 [2024-10-07 09:51:37.978483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.439 [2024-10-07 09:51:37.978490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.439 [2024-10-07 09:51:37.989357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.439 [2024-10-07 09:51:37.989376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.439 [2024-10-07 09:51:37.989385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.439 [2024-10-07 09:51:38.000592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.439 [2024-10-07 09:51:38.000613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.439 [2024-10-07 09:51:38.000628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.439 [2024-10-07 09:51:38.012662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.439 [2024-10-07 09:51:38.012681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.439 [2024-10-07 09:51:38.012688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.439 [2024-10-07 09:51:38.022499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.439 [2024-10-07 09:51:38.022517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.439 [2024-10-07 09:51:38.022524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.439 [2024-10-07 09:51:38.034390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.439 [2024-10-07 09:51:38.034408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.439 [2024-10-07 09:51:38.034415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.439 [2024-10-07 09:51:38.046031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.439 [2024-10-07 09:51:38.046052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.439 [2024-10-07 09:51:38.046059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.439 [2024-10-07 09:51:38.054525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.440 [2024-10-07 09:51:38.054543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.440 [2024-10-07 09:51:38.054550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.440 [2024-10-07 09:51:38.065812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.440 [2024-10-07 09:51:38.065833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.440 [2024-10-07 09:51:38.065843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.440 [2024-10-07 09:51:38.077248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.440 [2024-10-07 09:51:38.077266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.440 [2024-10-07 09:51:38.077275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.440 [2024-10-07 09:51:38.086813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.440 [2024-10-07 09:51:38.086836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.440 [2024-10-07 09:51:38.086846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.440 [2024-10-07 09:51:38.094519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.440 [2024-10-07 09:51:38.094537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.440 [2024-10-07 09:51:38.094543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.103861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.103880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.103887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.114455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.114473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.114480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.122276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.122293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.122300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.131361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.131379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.131386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.141455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.141472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.141479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.151025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.151047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.151055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.160484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.160507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.160517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.168653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.168671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.168677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.177329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.177347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.177353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.186890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.186907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.186913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.195017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.195035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.195042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.204065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.204082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.204089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.212865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.212885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.212895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.221411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.221429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:38.700 [2024-10-07 09:51:38.221441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.230020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.230038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.230045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.238747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.238765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.238772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.248870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.248893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.248904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.258041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.258059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.258067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.267519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.267537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.267544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.275687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.275705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.275711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.285464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.700 [2024-10-07 09:51:38.285484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:7870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.700 [2024-10-07 09:51:38.285494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.700 [2024-10-07 09:51:38.293344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.701 [2024-10-07 09:51:38.293362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.701 [2024-10-07 09:51:38.293369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.701 [2024-10-07 09:51:38.302676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.701 [2024-10-07 09:51:38.302698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.701 [2024-10-07 09:51:38.302705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.701 [2024-10-07 09:51:38.312155] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.701 [2024-10-07 09:51:38.312174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.701 [2024-10-07 09:51:38.312182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.701 [2024-10-07 09:51:38.320030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.701 [2024-10-07 09:51:38.320048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.701 [2024-10-07 09:51:38.320054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.701 [2024-10-07 09:51:38.329144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.701 [2024-10-07 09:51:38.329161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.701 [2024-10-07 09:51:38.329168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.701 [2024-10-07 09:51:38.338904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.701 [2024-10-07 09:51:38.338923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.701 [2024-10-07 09:51:38.338929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.701 [2024-10-07 09:51:38.347788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.701 [2024-10-07 09:51:38.347807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.701 [2024-10-07 09:51:38.347814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.701 [2024-10-07 09:51:38.356364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.701 [2024-10-07 09:51:38.356382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.701 [2024-10-07 09:51:38.356389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.960 [2024-10-07 09:51:38.366488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.960 [2024-10-07 09:51:38.366509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.960 [2024-10-07 09:51:38.366517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.960 [2024-10-07 09:51:38.375109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.960 [2024-10-07 09:51:38.375126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.960 [2024-10-07 09:51:38.375133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.960 [2024-10-07 09:51:38.383743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.960 [2024-10-07 09:51:38.383764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.960 [2024-10-07 09:51:38.383772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.960 [2024-10-07 09:51:38.392345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.960 [2024-10-07 09:51:38.392363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.960 [2024-10-07 09:51:38.392369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.960 [2024-10-07 09:51:38.402047] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.960 [2024-10-07 09:51:38.402065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.960 [2024-10-07 09:51:38.402071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.960 [2024-10-07 09:51:38.410080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 
00:30:38.960 [2024-10-07 09:51:38.410101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.960 [2024-10-07 09:51:38.410107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.960 [2024-10-07 09:51:38.420097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.960 [2024-10-07 09:51:38.420115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.960 [2024-10-07 09:51:38.420121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.960 [2024-10-07 09:51:38.429547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.960 [2024-10-07 09:51:38.429565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.960 [2024-10-07 09:51:38.429572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.960 [2024-10-07 09:51:38.437819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.960 [2024-10-07 09:51:38.437838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.960 [2024-10-07 09:51:38.437844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.960 [2024-10-07 09:51:38.446682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.960 [2024-10-07 09:51:38.446700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.960 [2024-10-07 09:51:38.446707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.961 [2024-10-07 09:51:38.455978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.961 [2024-10-07 09:51:38.455999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.961 [2024-10-07 09:51:38.456005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.961 [2024-10-07 09:51:38.465443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.961 [2024-10-07 09:51:38.465462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.961 [2024-10-07 09:51:38.465468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.961 [2024-10-07 09:51:38.473244] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.961 [2024-10-07 09:51:38.473262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.961 [2024-10-07 09:51:38.473269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.961 [2024-10-07 09:51:38.482515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.961 [2024-10-07 09:51:38.482537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.961 [2024-10-07 09:51:38.482547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.961 [2024-10-07 09:51:38.491651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.961 [2024-10-07 09:51:38.491668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.961 [2024-10-07 09:51:38.491675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.961 [2024-10-07 09:51:38.499776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.961 [2024-10-07 09:51:38.499793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.961 [2024-10-07 09:51:38.499799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.961 [2024-10-07 09:51:38.509085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.961 [2024-10-07 09:51:38.509103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.961 [2024-10-07 09:51:38.509110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.961 [2024-10-07 09:51:38.520412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.961 [2024-10-07 09:51:38.520430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.961 [2024-10-07 09:51:38.520436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.961 [2024-10-07 09:51:38.532029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.961 [2024-10-07 09:51:38.532046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.961 [2024-10-07 09:51:38.532053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:30:38.961 [2024-10-07 09:51:38.540029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.961 [2024-10-07 09:51:38.540047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.961 [2024-10-07 09:51:38.540053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.961 [2024-10-07 09:51:38.551531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.961 [2024-10-07 09:51:38.551549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.961 [2024-10-07 09:51:38.551555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.961 [2024-10-07 09:51:38.560884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.961 [2024-10-07 09:51:38.560902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.961 [2024-10-07 09:51:38.560909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.961 [2024-10-07 09:51:38.569540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.961 [2024-10-07 09:51:38.569558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.961 [2024-10-07 09:51:38.569565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.961 [2024-10-07 09:51:38.580258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.961 [2024-10-07 09:51:38.580280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.961 [2024-10-07 09:51:38.580287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.961 [2024-10-07 09:51:38.592424] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.961 [2024-10-07 09:51:38.592442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.961 [2024-10-07 09:51:38.592449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.961 [2024-10-07 09:51:38.600505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.961 [2024-10-07 09:51:38.600523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.961 [2024-10-07 09:51:38.600529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.961 [2024-10-07 09:51:38.612014] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.961 [2024-10-07 09:51:38.612032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.961 [2024-10-07 09:51:38.612038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.961 [2024-10-07 09:51:38.621819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:38.961 [2024-10-07 09:51:38.621837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.961 [2024-10-07 09:51:38.621847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.631315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.631333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.631339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.639051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.639069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.639075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.652899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.652923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.652935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.660896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.660913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.660920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.670129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.670148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.670154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.680371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.680389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.680395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.691064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.691082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.691089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.701844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.701863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.701869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.712519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.712542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.712550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.720452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.720472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.720479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.730806] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.730824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.730830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.739850] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.739868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:39.221 [2024-10-07 09:51:38.739875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.749048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.749065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.749072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.758218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.758237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.758243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.766977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.766995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.767002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.775773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.775791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.775798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.784712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.784730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.784737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.792484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.792502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.792509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.802093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.802110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4242 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.802116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.812940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.812958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.812964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.823245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.823265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.823272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.835062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.835081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.835092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.843564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.843581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.221 [2024-10-07 09:51:38.843588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.221 [2024-10-07 09:51:38.854962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.221 [2024-10-07 09:51:38.854980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.222 [2024-10-07 09:51:38.854987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.222 [2024-10-07 09:51:38.866732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.222 [2024-10-07 09:51:38.866750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.222 [2024-10-07 09:51:38.866756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.222 [2024-10-07 09:51:38.878711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.222 [2024-10-07 09:51:38.878730] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.222 [2024-10-07 09:51:38.878740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:38.890726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:38.890745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:38.890752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:38.900230] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:38.900248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:38.900256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:38.907926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:38.907943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:38.907950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:38.917962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:38.917979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:38.917986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:38.928951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:38.928969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:38.928975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:38.937830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:38.937848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:38.937855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:38.946069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 
09:51:38.946087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:38.946094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:38.954827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:38.954845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:38.954851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 26528.00 IOPS, 103.62 MiB/s [2024-10-07 09:51:38.963577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:38.963595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:38.963602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:38.973645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:38.973665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:38.973671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:38.982943] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:38.982961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:38.982968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:38.992582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:38.992601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:38.992607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:39.002363] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:39.002381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:39.002388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:39.012651] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:39.012669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:39.012676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:39.021762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:39.021780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:39.021787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:39.029411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:39.029428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:39.029435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:39.039053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:39.039071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:39.039080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:39.050394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:39.050416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:39.050428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:39.061158] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:39.061176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:39.061184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:39.070970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:39.070990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:39.071001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:30:39.482 [2024-10-07 09:51:39.082117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:39.082136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:39.082142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:39.089905] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:39.089922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:39.089929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:39.100223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:39.100240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:39.100246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:39.110089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:39.110107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:39.110114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:39.119053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:39.119070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:39.119077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:39.128389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:39.128415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.482 [2024-10-07 09:51:39.128422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.482 [2024-10-07 09:51:39.137071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.482 [2024-10-07 09:51:39.137088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.483 [2024-10-07 09:51:39.137094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.743 [2024-10-07 09:51:39.145078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.743 [2024-10-07 09:51:39.145095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.743 [2024-10-07 09:51:39.145102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.743 [2024-10-07 09:51:39.156259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.743 [2024-10-07 09:51:39.156277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.743 [2024-10-07 09:51:39.156283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.743 [2024-10-07 09:51:39.166744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.743 [2024-10-07 09:51:39.166761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.743 [2024-10-07 09:51:39.166767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.743 [2024-10-07 09:51:39.177431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.743 [2024-10-07 09:51:39.177449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.743 [2024-10-07 09:51:39.177455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.743 [2024-10-07 09:51:39.185396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.743 [2024-10-07 09:51:39.185414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.743 [2024-10-07 09:51:39.185420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.743 [2024-10-07 09:51:39.195640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.743 [2024-10-07 09:51:39.195658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.743 [2024-10-07 09:51:39.195664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.743 [2024-10-07 09:51:39.204278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.743 [2024-10-07 09:51:39.204299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.743 [2024-10-07 09:51:39.204307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.743 [2024-10-07 09:51:39.212640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.743 [2024-10-07 09:51:39.212661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.743 [2024-10-07 09:51:39.212668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.222241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.222260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.744 [2024-10-07 09:51:39.222270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.230535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.230553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.744 [2024-10-07 09:51:39.230564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.238912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.238930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.744 [2024-10-07 09:51:39.238936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.249003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.249021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.744 [2024-10-07 09:51:39.249028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.258351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.258369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.744 [2024-10-07 09:51:39.258375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.267533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.267554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:39.744 [2024-10-07 09:51:39.267561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.275071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.275088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.744 [2024-10-07 09:51:39.275094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.284745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.284762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.744 [2024-10-07 09:51:39.284772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.293692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.293709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.744 [2024-10-07 09:51:39.293716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.301908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.301927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.744 [2024-10-07 09:51:39.301934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.310773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.310790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.744 [2024-10-07 09:51:39.310796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.322338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.322356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.744 [2024-10-07 09:51:39.322362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.333111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.333128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 
lba:21151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.744 [2024-10-07 09:51:39.333134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.341198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.341215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.744 [2024-10-07 09:51:39.341222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.350737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.350754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.744 [2024-10-07 09:51:39.350761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.362287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.362304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.744 [2024-10-07 09:51:39.362311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.374163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.374180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.744 [2024-10-07 09:51:39.374186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.384104] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.384122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.744 [2024-10-07 09:51:39.384129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.744 [2024-10-07 09:51:39.394076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:39.744 [2024-10-07 09:51:39.394094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.744 [2024-10-07 09:51:39.394101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.005 [2024-10-07 09:51:39.405529] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.005 [2024-10-07 09:51:39.405547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.005 [2024-10-07 09:51:39.405553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.005 [2024-10-07 09:51:39.413354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.005 [2024-10-07 09:51:39.413371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.005 [2024-10-07 09:51:39.413378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.422960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.422978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.422985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.431676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.431694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.431700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.440534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.440551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.440558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.450589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.450607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.450622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.460075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.460093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.460099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.467879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 
00:30:40.006 [2024-10-07 09:51:39.467897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.467903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.477868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.477886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.477892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.486139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.486156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.486162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.494734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.494751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.494758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.504412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.504430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.504437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.515161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.515179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.515185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.526646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.526662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.526669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.536272] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.536295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.536302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.545395] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.545417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.545428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.553466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.553484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.553490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.564634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.564655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.564661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.575063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.575082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.575093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.583265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.583282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.583289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.592763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.592781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.592787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:30:40.006 [2024-10-07 09:51:39.603926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.603944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.603950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.614564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.614581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.614588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.626223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.626239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.626246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.636855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.636877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.636887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.644461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.644484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.644494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.654251] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.654269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.654275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.006 [2024-10-07 09:51:39.662738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.006 [2024-10-07 09:51:39.662756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.006 [2024-10-07 09:51:39.662762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.267 [2024-10-07 09:51:39.672369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.267 [2024-10-07 09:51:39.672390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.267 [2024-10-07 09:51:39.672396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.267 [2024-10-07 09:51:39.682685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.267 [2024-10-07 09:51:39.682704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.267 [2024-10-07 09:51:39.682710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.267 [2024-10-07 09:51:39.690834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.267 [2024-10-07 09:51:39.690852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.267 [2024-10-07 09:51:39.690859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.267 [2024-10-07 09:51:39.701594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.267 [2024-10-07 09:51:39.701612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.267 [2024-10-07 09:51:39.701627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.267 [2024-10-07 09:51:39.711090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.267 [2024-10-07 09:51:39.711108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.267 [2024-10-07 09:51:39.711115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.267 [2024-10-07 09:51:39.719736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.267 [2024-10-07 09:51:39.719754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.267 [2024-10-07 09:51:39.719761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.267 [2024-10-07 09:51:39.729475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.267 [2024-10-07 09:51:39.729493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.267 [2024-10-07 09:51:39.729499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.267 [2024-10-07 09:51:39.738704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.267 [2024-10-07 09:51:39.738722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.267 [2024-10-07 09:51:39.738728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.267 [2024-10-07 09:51:39.747402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.747419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.268 [2024-10-07 09:51:39.747426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.268 [2024-10-07 09:51:39.756763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.756781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.268 [2024-10-07 09:51:39.756787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.268 [2024-10-07 09:51:39.765764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.765781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.268 [2024-10-07 09:51:39.765787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.268 [2024-10-07 09:51:39.774083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.774100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.268 [2024-10-07 09:51:39.774107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.268 [2024-10-07 09:51:39.782360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.782377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.268 [2024-10-07 09:51:39.782384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.268 [2024-10-07 09:51:39.793835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.793853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:40.268 [2024-10-07 09:51:39.793859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.268 [2024-10-07 09:51:39.804512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.804530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.268 [2024-10-07 09:51:39.804536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.268 [2024-10-07 09:51:39.813148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.813165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.268 [2024-10-07 09:51:39.813172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.268 [2024-10-07 09:51:39.823555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.823572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.268 [2024-10-07 09:51:39.823579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.268 [2024-10-07 09:51:39.832512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.832529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.268 [2024-10-07 09:51:39.832535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.268 [2024-10-07 09:51:39.841077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.841095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.268 [2024-10-07 09:51:39.841102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.268 [2024-10-07 09:51:39.850201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.850221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.268 [2024-10-07 09:51:39.850230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.268 [2024-10-07 09:51:39.858237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.858258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 
lba:10677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.268 [2024-10-07 09:51:39.858272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.268 [2024-10-07 09:51:39.867553] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.867572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.268 [2024-10-07 09:51:39.867579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.268 [2024-10-07 09:51:39.877944] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.877961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.268 [2024-10-07 09:51:39.877968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.268 [2024-10-07 09:51:39.889762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.889780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.268 [2024-10-07 09:51:39.889787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.268 [2024-10-07 09:51:39.897611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.897636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.268 [2024-10-07 09:51:39.897642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.268 [2024-10-07 09:51:39.908885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.908902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.268 [2024-10-07 09:51:39.908909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.268 [2024-10-07 09:51:39.921199] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.268 [2024-10-07 09:51:39.921216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.268 [2024-10-07 09:51:39.921223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.528 [2024-10-07 09:51:39.930490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30) 00:30:40.528 [2024-10-07 09:51:39.930506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.528 [2024-10-07 09:51:39.930513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:40.528 [2024-10-07 09:51:39.940046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30)
00:30:40.528 [2024-10-07 09:51:39.940063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.528 [2024-10-07 09:51:39.940070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:40.528 [2024-10-07 09:51:39.951726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30)
00:30:40.528 [2024-10-07 09:51:39.951746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.528 [2024-10-07 09:51:39.951752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:40.528 26607.50 IOPS, 103.94 MiB/s [2024-10-07 09:51:39.959912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da4c30)
00:30:40.528 [2024-10-07 09:51:39.959929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.528 [2024-10-07 09:51:39.959936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:40.529
00:30:40.529 Latency(us)
00:30:40.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:40.529 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:40.529 nvme0n1 : 2.00 26623.84 104.00 0.00 0.00 4802.45 2239.15 16165.55
00:30:40.529 ===================================================================================================================
00:30:40.529 Total : 26623.84 104.00 0.00 0.00 4802.45 2239.15 16165.55
00:30:40.529 {
00:30:40.529   "results": [
00:30:40.529     {
00:30:40.529       "job": "nvme0n1",
00:30:40.529       "core_mask": "0x2",
00:30:40.529       "workload": "randread",
00:30:40.529       "status": "finished",
00:30:40.529       "queue_depth": 128,
00:30:40.529       "io_size": 4096,
00:30:40.529       "runtime": 2.00358,
00:30:40.529       "iops": 26623.843320456384,
00:30:40.529       "mibps": 103.99938797053275,
00:30:40.529       "io_failed": 0,
00:30:40.529       "io_timeout": 0,
00:30:40.529       "avg_latency_us": 4802.452179542458,
00:30:40.529       "min_latency_us": 2239.1466666666665,
00:30:40.529       "max_latency_us": 16165.546666666667
00:30:40.529     }
00:30:40.529   ],
00:30:40.529   "core_count": 1
00:30:40.529 }
00:30:40.529 09:51:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:40.529 09:51:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:40.529 09:51:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:40.529 | .driver_specific
00:30:40.529 | .nvme_error
00:30:40.529 | .status_code
00:30:40.529 | .command_transient_transport_error'
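The xtrace lines above are the pass/fail probe for this run: once bdevperf exits, digest.sh reads the per-bdev NVMe error counters and pulls out the transient transport error count. A minimal sketch of what get_transient_errcount resolves to, assuming only what the trace shows (the /var/tmp/bperf.sock RPC socket and the jq path); the function body here is a reconstruction for illustration, not the verbatim digest.sh source:

    # hypothetical reconstruction of get_transient_errcount from the trace above
    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat reports per-bdev NVMe error counters because the
        # controller was created with bdev_nvme_set_options --nvme-error-stat
        scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

The (( 209 > 0 )) check in the trace that follows asserts this count is non-zero: each injected digest failure should surface as a COMMAND TRANSIENT TRANSPORT ERROR completion, and 209 of them were recorded in the run above.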
00:30:40.529 09:51:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:40.529 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 209 > 0 ))
00:30:40.529 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3537450
00:30:40.529 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' -z 3537450 ']'
00:30:40.529 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # kill -0 3537450
00:30:40.529 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # uname
00:30:40.529 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:30:40.529 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3537450
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # process_name=reactor_1
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']'
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3537450'
00:30:40.790 killing process with pid 3537450
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # kill 3537450
00:30:40.790 Received shutdown signal, test time was about 2.000000 seconds
00:30:40.790
00:30:40.790 Latency(us)
00:30:40.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:40.790 ===================================================================================================================
00:30:40.790 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@977 -- # wait 3537450
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3538253
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3538253 /var/tmp/bperf.sock
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # '[' -z 3538253 ']'
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local max_retries=100
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:40.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@843 -- # xtrace_disable
00:30:40.790 09:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:40.790 [2024-10-07 09:51:40.400532] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization...
00:30:40.790 [2024-10-07 09:51:40.400590] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3538253 ]
00:30:40.790 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:40.790 Zero copy mechanism will not be used.
00:30:41.050 [2024-10-07 09:51:40.475878] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:41.050 [2024-10-07 09:51:40.529041] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:30:41.621 09:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # (( i == 0 ))
00:30:41.621 09:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@867 -- # return 0
00:30:41.621 09:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:41.621 09:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:41.881 09:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:41.881 09:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@564 -- # xtrace_disable
00:30:41.881 09:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:41.881 09:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:30:41.881 09:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:41.881 09:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:42.141 nvme0n1
00:30:42.141 09:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:42.141 09:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@564 -- # xtrace_disable
00:30:42.141 09:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:42.141 09:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
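The trace up to this point is the bring-up for the second error run: a fresh bdevperf (2-second random reads, 128 KiB blocks, queue depth 16), data digest enabled on the attach, and the accel CRC32C path armed to corrupt results. Condensed into plain RPC calls, under the assumption that the bperf_rpc and rpc_cmd helpers are thin wrappers around scripts/rpc.py (bperf_rpc targets /var/tmp/bperf.sock per the @18 lines; which socket rpc_cmd targets is not visible in this excerpt):

    # keep per-bdev NVMe error counters and retry failed I/O indefinitely
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # error injection stays off while the controller is attached
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    # attach with data digest (--ddgst): TCP data PDUs now carry a CRC32C that
    # the initiator verifies on receive
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt crc32c results from here on (the '-i 32' argument is reproduced
    # from the trace), so receive-side digest checks fail and reads complete
    # with transient transport errors, as seen below
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

Every flag here is taken verbatim from the xtrace; only the flattening of the helper functions into direct rpc.py invocations is assumed.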
00:30:42.141 09:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:42.141 09:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:42.402 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:42.402 Zero copy mechanism will not be used.
00:30:42.402 Running I/O for 2 seconds...
00:30:42.402 [2024-10-07 09:51:41.845841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0)
00:30:42.402 [2024-10-07 09:51:41.845872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.402 [2024-10-07 09:51:41.845880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:42.402 [2024-10-07 09:51:41.852955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0)
00:30:42.402 [2024-10-07 09:51:41.852977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.402 [2024-10-07 09:51:41.852985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:42.402 [2024-10-07 09:51:41.863653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0)
00:30:42.402 [2024-10-07 09:51:41.863675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.402 [2024-10-07 09:51:41.863682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:42.402 [2024-10-07 09:51:41.874828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0)
00:30:42.402 [2024-10-07 09:51:41.874848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.402 [2024-10-07 09:51:41.874855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:42.402 [2024-10-07 09:51:41.884402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0)
00:30:42.402 [2024-10-07 09:51:41.884421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.402 [2024-10-07 09:51:41.884427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:42.402 [2024-10-07 09:51:41.895123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0)
00:30:42.402 [2024-10-07 09:51:41.895142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.402 [2024-10-07 09:51:41.895153]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.402 [2024-10-07 09:51:41.906360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.402 [2024-10-07 09:51:41.906380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.402 [2024-10-07 09:51:41.906386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.402 [2024-10-07 09:51:41.916712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.402 [2024-10-07 09:51:41.916731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.402 [2024-10-07 09:51:41.916738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.402 [2024-10-07 09:51:41.929184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.402 [2024-10-07 09:51:41.929203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.402 [2024-10-07 09:51:41.929211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.402 [2024-10-07 09:51:41.941143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.402 [2024-10-07 09:51:41.941162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.402 [2024-10-07 09:51:41.941169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.402 [2024-10-07 09:51:41.947168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.402 [2024-10-07 09:51:41.947187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.402 [2024-10-07 09:51:41.947193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.402 [2024-10-07 09:51:41.951597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.402 [2024-10-07 09:51:41.951620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.402 [2024-10-07 09:51:41.951627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.402 [2024-10-07 09:51:41.955613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.402 [2024-10-07 09:51:41.955637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:42.402 [2024-10-07 09:51:41.955643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.402 [2024-10-07 09:51:41.962359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.402 [2024-10-07 09:51:41.962378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.402 [2024-10-07 09:51:41.962384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.402 [2024-10-07 09:51:41.966835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.402 [2024-10-07 09:51:41.966858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.402 [2024-10-07 09:51:41.966864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.402 [2024-10-07 09:51:41.973364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.402 [2024-10-07 09:51:41.973382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.402 [2024-10-07 09:51:41.973388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.402 [2024-10-07 09:51:41.979088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.403 [2024-10-07 09:51:41.979106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.403 [2024-10-07 09:51:41.979113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.403 [2024-10-07 09:51:41.990348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.403 [2024-10-07 09:51:41.990367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.403 [2024-10-07 09:51:41.990373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.403 [2024-10-07 09:51:41.996039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.403 [2024-10-07 09:51:41.996057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.403 [2024-10-07 09:51:41.996064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.403 [2024-10-07 09:51:42.000898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.403 [2024-10-07 09:51:42.000917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.403 [2024-10-07 09:51:42.000923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.403 [2024-10-07 09:51:42.005418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.403 [2024-10-07 09:51:42.005438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.403 [2024-10-07 09:51:42.005444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.403 [2024-10-07 09:51:42.012726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.403 [2024-10-07 09:51:42.012745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.403 [2024-10-07 09:51:42.012752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.403 [2024-10-07 09:51:42.020384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.403 [2024-10-07 09:51:42.020402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.403 [2024-10-07 09:51:42.020409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.403 [2024-10-07 09:51:42.029370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.403 [2024-10-07 09:51:42.029389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.403 [2024-10-07 09:51:42.029395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.403 [2024-10-07 09:51:42.039264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.403 [2024-10-07 09:51:42.039283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.403 [2024-10-07 09:51:42.039289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.403 [2024-10-07 09:51:42.044118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.403 [2024-10-07 09:51:42.044136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.403 [2024-10-07 09:51:42.044143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.403 [2024-10-07 09:51:42.048527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.403 [2024-10-07 09:51:42.048546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.403 [2024-10-07 09:51:42.048552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.403 [2024-10-07 09:51:42.052533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.403 [2024-10-07 09:51:42.052552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.403 [2024-10-07 09:51:42.052558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.403 [2024-10-07 09:51:42.057017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.403 [2024-10-07 09:51:42.057035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.403 [2024-10-07 09:51:42.057042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.403 [2024-10-07 09:51:42.061684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.403 [2024-10-07 09:51:42.061703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.403 [2024-10-07 09:51:42.061709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.664 [2024-10-07 09:51:42.066163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.664 [2024-10-07 09:51:42.066182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.664 [2024-10-07 09:51:42.066188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.664 [2024-10-07 09:51:42.071211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.664 [2024-10-07 09:51:42.071229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.664 [2024-10-07 09:51:42.071239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.664 [2024-10-07 09:51:42.075364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.664 [2024-10-07 09:51:42.075382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.664 [2024-10-07 09:51:42.075389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.664 [2024-10-07 09:51:42.079881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 
00:30:42.664 [2024-10-07 09:51:42.079900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.664 [2024-10-07 09:51:42.079906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.664 [2024-10-07 09:51:42.087646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.664 [2024-10-07 09:51:42.087665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.664 [2024-10-07 09:51:42.087671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.664 [2024-10-07 09:51:42.096281] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.664 [2024-10-07 09:51:42.096300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.664 [2024-10-07 09:51:42.096306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.664 [2024-10-07 09:51:42.100659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.664 [2024-10-07 09:51:42.100677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.664 [2024-10-07 09:51:42.100684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.664 [2024-10-07 09:51:42.108203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.664 [2024-10-07 09:51:42.108222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.664 [2024-10-07 09:51:42.108228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.664 [2024-10-07 09:51:42.115356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.664 [2024-10-07 09:51:42.115374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.664 [2024-10-07 09:51:42.115381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.664 [2024-10-07 09:51:42.119759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.664 [2024-10-07 09:51:42.119777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.664 [2024-10-07 09:51:42.119783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.664 [2024-10-07 09:51:42.127508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1e937f0) 00:30:42.664 [2024-10-07 09:51:42.127530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.664 [2024-10-07 09:51:42.127536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.664 [2024-10-07 09:51:42.133303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.664 [2024-10-07 09:51:42.133321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.664 [2024-10-07 09:51:42.133328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.664 [2024-10-07 09:51:42.137885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.664 [2024-10-07 09:51:42.137903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.664 [2024-10-07 09:51:42.137909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.664 [2024-10-07 09:51:42.145650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.664 [2024-10-07 09:51:42.145668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.664 [2024-10-07 09:51:42.145675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.664 [2024-10-07 09:51:42.152582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.664 [2024-10-07 09:51:42.152600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.664 [2024-10-07 09:51:42.152606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.664 [2024-10-07 09:51:42.159265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.664 [2024-10-07 09:51:42.159284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.664 [2024-10-07 09:51:42.159290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.664 [2024-10-07 09:51:42.163765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:42.664 [2024-10-07 09:51:42.163783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.664 [2024-10-07 09:51:42.163790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.664 [2024-10-07 09:51:42.168300] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0)
00:30:42.664 [2024-10-07 09:51:42.168319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.665 [2024-10-07 09:51:42.168325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:42.665 [2024-10-07 09:51:42.175480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0)
00:30:42.665 [2024-10-07 09:51:42.175499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.665 [2024-10-07 09:51:42.175506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... ~75 further data digest error events on tqpair=(0x1e937f0) omitted (09:51:42.182 through 09:51:42.837); each follows the same three-line pattern above: a READ on sqid:1 (nsid:1, len:32) completed with TRANSIENT TRANSPORT ERROR (00/22), with only cid, lba, and sqhd varying ...]
00:30:43.190 3907.00 IOPS, 488.38 MiB/s
[... ~65 more of the same data digest error events omitted (09:51:42.849 through 09:51:43.511) ...]
00:30:43.978 [2024-10-07 09:51:43.518727]
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:43.978 [2024-10-07 09:51:43.518745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.978 [2024-10-07 09:51:43.518751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:43.978 [2024-10-07 09:51:43.530190] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:43.978 [2024-10-07 09:51:43.530208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.978 [2024-10-07 09:51:43.530215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:43.978 [2024-10-07 09:51:43.541984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:43.978 [2024-10-07 09:51:43.542003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.978 [2024-10-07 09:51:43.542010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:43.978 [2024-10-07 09:51:43.554327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:43.978 [2024-10-07 09:51:43.554345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.978 [2024-10-07 09:51:43.554354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.978 [2024-10-07 09:51:43.566260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:43.978 [2024-10-07 09:51:43.566279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.978 [2024-10-07 09:51:43.566285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:43.978 [2024-10-07 09:51:43.578121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:43.978 [2024-10-07 09:51:43.578139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.978 [2024-10-07 09:51:43.578146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:43.978 [2024-10-07 09:51:43.589553] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:43.978 [2024-10-07 09:51:43.589572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.978 [2024-10-07 09:51:43.589579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
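Each injected corruption in this stretch surfaces as a matched trio of lines: an *ERROR* from nvme_tcp.c:1470 where the host re-computes the CRC32C data digest of a received PDU, the failed READ echoed by nvme_io_qpair_print_command, and a completion stamped COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 (generic) with status code 0x22. A minimal sketch for tallying both halves of that pattern from a saved copy of this console output; the build.log name and the helper itself are illustrative, not part of the test suite:

  #!/usr/bin/env bash
  # Count injected digest failures in a saved copy of this console log.
  # "build.log" is an assumed filename, not something this job writes.
  log=${1:-build.log}
  # One *ERROR* line per corrupted PDU, from the host-side receive path.
  errs=$(grep -c 'data digest error on tqpair' "$log")
  # One transient-transport-error completion per command that failed.
  comps=$(grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' "$log")
  echo "digest errors: $errs, transient completions: $comps"

The bdev retry policy traced later (--bdev-retry-count -1) resubmits these transiently failed commands, which is consistent with the summary below reporting io_failed: 0 despite hundreds of transient completions.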
00:30:43.978 [2024-10-07 09:51:43.601314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:43.978 [2024-10-07 09:51:43.601332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.978 [2024-10-07 09:51:43.601339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.978 [2024-10-07 09:51:43.613503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:43.978 [2024-10-07 09:51:43.613523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.978 [2024-10-07 09:51:43.613530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:43.978 [2024-10-07 09:51:43.625665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:43.978 [2024-10-07 09:51:43.625683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.978 [2024-10-07 09:51:43.625691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:43.978 [2024-10-07 09:51:43.637707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:43.978 [2024-10-07 09:51:43.637726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.978 [2024-10-07 09:51:43.637732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.240 [2024-10-07 09:51:43.647609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:44.240 [2024-10-07 09:51:43.647633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.240 [2024-10-07 09:51:43.647640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.240 [2024-10-07 09:51:43.656808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:44.240 [2024-10-07 09:51:43.656831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.240 [2024-10-07 09:51:43.656837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.240 [2024-10-07 09:51:43.664098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:44.240 [2024-10-07 09:51:43.664117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.240 [2024-10-07 09:51:43.664123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.240 [2024-10-07 09:51:43.673399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:44.240 [2024-10-07 09:51:43.673417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.240 [2024-10-07 09:51:43.673424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.240 [2024-10-07 09:51:43.684116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:44.240 [2024-10-07 09:51:43.684134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.240 [2024-10-07 09:51:43.684141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.240 [2024-10-07 09:51:43.695726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:44.240 [2024-10-07 09:51:43.695745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.240 [2024-10-07 09:51:43.695751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.240 [2024-10-07 09:51:43.706289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:44.240 [2024-10-07 09:51:43.706307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.240 [2024-10-07 09:51:43.706314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.240 [2024-10-07 09:51:43.717406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:44.240 [2024-10-07 09:51:43.717425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.240 [2024-10-07 09:51:43.717431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.240 [2024-10-07 09:51:43.727827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:44.240 [2024-10-07 09:51:43.727845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.240 [2024-10-07 09:51:43.727852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.240 [2024-10-07 09:51:43.738397] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:44.240 [2024-10-07 09:51:43.738416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.240 [2024-10-07 09:51:43.738422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.240 [2024-10-07 09:51:43.748147] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:44.240 [2024-10-07 09:51:43.748166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.240 [2024-10-07 09:51:43.748172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.240 [2024-10-07 09:51:43.758462] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:44.240 [2024-10-07 09:51:43.758481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.240 [2024-10-07 09:51:43.758487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.240 [2024-10-07 09:51:43.769415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:44.240 [2024-10-07 09:51:43.769434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.240 [2024-10-07 09:51:43.769440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.240 [2024-10-07 09:51:43.780911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:44.240 [2024-10-07 09:51:43.780929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.240 [2024-10-07 09:51:43.780936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.240 [2024-10-07 09:51:43.793079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:44.240 [2024-10-07 09:51:43.793098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.240 [2024-10-07 09:51:43.793105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.240 [2024-10-07 09:51:43.805690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:44.240 [2024-10-07 09:51:43.805709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.240 [2024-10-07 09:51:43.805715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.240 [2024-10-07 09:51:43.817001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0) 00:30:44.240 [2024-10-07 09:51:43.817020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.240 [2024-10-07 09:51:43.817027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:44.240 [2024-10-07 09:51:43.828501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0)
00:30:44.240 [2024-10-07 09:51:43.828520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.240 [2024-10-07 09:51:43.828527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:44.240 [2024-10-07 09:51:43.840743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e937f0)
00:30:44.240 [2024-10-07 09:51:43.840765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.240 [2024-10-07 09:51:43.840771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:44.240 3443.00 IOPS, 430.38 MiB/s
00:30:44.240 Latency(us)
00:30:44.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:44.240 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:30:44.240 nvme0n1 : 2.00 3445.02 430.63 0.00 0.00 4641.68 969.39 12888.75
00:30:44.240 ===================================================================================================================
00:30:44.240 Total : 3445.02 430.63 0.00 0.00 4641.68 969.39 12888.75
00:30:44.240 {
00:30:44.240 "results": [
00:30:44.240 {
00:30:44.240 "job": "nvme0n1",
00:30:44.240 "core_mask": "0x2",
00:30:44.240 "workload": "randread",
00:30:44.240 "status": "finished",
00:30:44.240 "queue_depth": 16,
00:30:44.240 "io_size": 131072,
00:30:44.240 "runtime": 2.003472,
00:30:44.240 "iops": 3445.019446241325,
00:30:44.241 "mibps": 430.62743078016564,
00:30:44.241 "io_failed": 0,
00:30:44.241 "io_timeout": 0,
00:30:44.241 "avg_latency_us": 4641.67934318555,
00:30:44.241 "min_latency_us": 969.3866666666667,
00:30:44.241 "max_latency_us": 12888.746666666666
00:30:44.241 }
00:30:44.241 ],
00:30:44.241 "core_count": 1
00:30:44.241 }
00:30:44.241 09:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:44.241 09:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:44.241 09:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:44.241 | .driver_specific
00:30:44.241 | .nvme_error
00:30:44.241 | .status_code
00:30:44.241 | .command_transient_transport_error'
00:30:44.241 09:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:44.501 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 222 > 0 ))
00:30:44.502 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3538253
00:30:44.502 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' -z 3538253 ']'
00:30:44.502 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # kill -0 3538253
00:30:44.502 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # uname
00:30:44.502 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:30:44.502 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3538253
00:30:44.502 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # process_name=reactor_1
00:30:44.502 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']'
00:30:44.502 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3538253'
00:30:44.502 killing process with pid 3538253
00:30:44.502 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # kill 3538253
00:30:44.502 Received shutdown signal, test time was about 2.000000 seconds
00:30:44.502
00:30:44.502 Latency(us)
00:30:44.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:44.502 ===================================================================================================================
00:30:44.502 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:44.502 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@977 -- # wait 3538253
00:30:44.762 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:30:44.763 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:44.763 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:30:44.763 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:30:44.763 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:30:44.763 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3539106
00:30:44.763 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3539106 /var/tmp/bperf.sock
00:30:44.763 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # '[' -z 3539106 ']'
00:30:44.763 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:30:44.763 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:44.763 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local max_retries=100
00:30:44.763 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:44.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
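The 222 in the (( 222 > 0 )) check above is the command_transient_transport_error counter that --nvme-error-stat makes bdev_get_iostat report. What the traced get_transient_errcount helper boils down to, as a sketch (the standalone function layout is illustrative; in digest.sh the rpc.py call goes through the bperf_rpc wrapper seen in the trace):

  #!/usr/bin/env bash
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock
  get_transient_errcount() {
      # --nvme-error-stat (set before the controller is attached) makes
      # bdev_get_iostat carry per-status-code NVMe error counters.
      "$RPC" -s "$SOCK" bdev_get_iostat -b "$1" \
          | jq -r '.bdevs[0]
              | .driver_specific
              | .nvme_error
              | .status_code
              | .command_transient_transport_error'
  }
  # The digest-error test passes when at least one corruption was counted:
  (( $(get_transient_errcount nvme0n1) > 0 ))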
00:30:44.763 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@843 -- # xtrace_disable
00:30:44.763 09:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:44.763 [2024-10-07 09:51:44.303432] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization...
00:30:44.763 [2024-10-07 09:51:44.303488] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3539106 ]
00:30:44.763 [2024-10-07 09:51:44.378676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:45.023 [2024-10-07 09:51:44.431802] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:30:45.592 09:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # (( i == 0 ))
00:30:45.593 09:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@867 -- # return 0
00:30:45.593 09:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:45.593 09:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:45.593 09:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:45.853 09:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@564 -- # xtrace_disable
00:30:45.853 09:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:45.853 09:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:30:45.853 09:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:45.853 09:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:45.853 nvme0n1
00:30:45.853 09:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:30:45.853 09:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@564 -- # xtrace_disable
00:30:45.853 09:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:46.115 09:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:30:46.115 09:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:46.115 09:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:46.115 Running I/O for 2 seconds...
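Before that 2-second run began, the trace above staged the error path in a deliberate order: crc32c injection disabled on the target while the data-digest connection attaches (so the connect itself is clean), then the corrupt/-i 256 injection re-armed once I/O can flow. The same sequence condensed into a plain script; paths and arguments are copied from the trace, and the bare rpc.py calls stand in for rpc_cmd, which talks to the target application's default RPC socket rather than bperf.sock:

  #!/usr/bin/env bash
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock

  # -z parks bdevperf until perform_tests arrives over its RPC socket;
  # digest.sh's waitforlisten polls for the socket before issuing RPCs.
  "$SPDK"/build/examples/bdevperf -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

  # Count every NVMe error and retry transient failures indefinitely.
  "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Injection off on the target while the --ddgst connection attaches...
  "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # ...then re-enable corruption (arguments exactly as traced) and start the workload.
  "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests

Because the workload is now randwrite, the corrupted digests are caught on the target side (the tcp.c data_crc32_calc_done errors below) instead of in the host's receive path as in the randread phase above.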
00:30:46.115 [2024-10-07 09:51:45.612603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e23b8 00:30:46.115 [2024-10-07 09:51:45.613581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.115 [2024-10-07 09:51:45.613608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:46.115 [2024-10-07 09:51:45.621350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eff18 00:30:46.115 [2024-10-07 09:51:45.622251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.115 [2024-10-07 09:51:45.622269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.115 [2024-10-07 09:51:45.629866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:46.115 [2024-10-07 09:51:45.630776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.115 [2024-10-07 09:51:45.630793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.115 [2024-10-07 09:51:45.638386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e4578 00:30:46.115 [2024-10-07 09:51:45.639297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.115 [2024-10-07 09:51:45.639314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.115 [2024-10-07 09:51:45.646890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eff18 00:30:46.115 [2024-10-07 09:51:45.647799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.115 [2024-10-07 09:51:45.647816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.115 [2024-10-07 09:51:45.655376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:46.115 [2024-10-07 09:51:45.656284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.115 [2024-10-07 09:51:45.656301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.115 [2024-10-07 09:51:45.663848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e4578 00:30:46.115 [2024-10-07 09:51:45.664734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.115 [2024-10-07 09:51:45.664749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 
sqhd:0070 p:0 m:0 dnr:0 00:30:46.115 [2024-10-07 09:51:45.672327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eff18 00:30:46.115 [2024-10-07 09:51:45.673235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.115 [2024-10-07 09:51:45.673252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.115 [2024-10-07 09:51:45.680825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:46.116 [2024-10-07 09:51:45.681695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.116 [2024-10-07 09:51:45.681711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.116 [2024-10-07 09:51:45.689307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e4578 00:30:46.116 [2024-10-07 09:51:45.690215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.116 [2024-10-07 09:51:45.690231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.116 [2024-10-07 09:51:45.697767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eff18 00:30:46.116 [2024-10-07 09:51:45.698670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.116 [2024-10-07 09:51:45.698686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.116 [2024-10-07 09:51:45.706244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:46.116 [2024-10-07 09:51:45.707152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.116 [2024-10-07 09:51:45.707169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.116 [2024-10-07 09:51:45.714717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e4578 00:30:46.116 [2024-10-07 09:51:45.715591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.116 [2024-10-07 09:51:45.715608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.116 [2024-10-07 09:51:45.723177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eff18 00:30:46.116 [2024-10-07 09:51:45.724056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.116 [2024-10-07 09:51:45.724072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.116 [2024-10-07 09:51:45.731651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:46.116 [2024-10-07 09:51:45.732557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.116 [2024-10-07 09:51:45.732574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.116 [2024-10-07 09:51:45.740102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e4578 00:30:46.116 [2024-10-07 09:51:45.741014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.116 [2024-10-07 09:51:45.741030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.116 [2024-10-07 09:51:45.748579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eff18 00:30:46.116 [2024-10-07 09:51:45.749481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.116 [2024-10-07 09:51:45.749497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.116 [2024-10-07 09:51:45.757058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:46.116 [2024-10-07 09:51:45.757957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.116 [2024-10-07 09:51:45.757973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.116 [2024-10-07 09:51:45.765519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e4578 00:30:46.116 [2024-10-07 09:51:45.766386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.116 [2024-10-07 09:51:45.766402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.116 [2024-10-07 09:51:45.774017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eff18 00:30:46.116 [2024-10-07 09:51:45.774893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.116 [2024-10-07 09:51:45.774910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.378 [2024-10-07 09:51:45.782470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:46.378 [2024-10-07 09:51:45.783384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.378 [2024-10-07 09:51:45.783401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.378 [2024-10-07 09:51:45.790946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e4578 00:30:46.378 [2024-10-07 09:51:45.791813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.378 [2024-10-07 09:51:45.791829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.378 [2024-10-07 09:51:45.799389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eff18 00:30:46.378 [2024-10-07 09:51:45.800301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.378 [2024-10-07 09:51:45.800317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.378 [2024-10-07 09:51:45.807854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:46.378 [2024-10-07 09:51:45.808764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.378 [2024-10-07 09:51:45.808780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.378 [2024-10-07 09:51:45.816305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e4578 00:30:46.378 [2024-10-07 09:51:45.817210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.378 [2024-10-07 09:51:45.817230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.378 [2024-10-07 09:51:45.824781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eff18 00:30:46.378 [2024-10-07 09:51:45.825681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.378 [2024-10-07 09:51:45.825697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.378 [2024-10-07 09:51:45.833223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:46.378 [2024-10-07 09:51:45.834130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.378 [2024-10-07 09:51:45.834146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.378 [2024-10-07 09:51:45.842838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e4578 00:30:46.378 [2024-10-07 09:51:45.844248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.378 [2024-10-07 09:51:45.844264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:46.378 [2024-10-07 09:51:45.850800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198ea680 00:30:46.378 [2024-10-07 09:51:45.851894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.378 [2024-10-07 09:51:45.851910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.378 [2024-10-07 09:51:45.859140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f8618 00:30:46.378 [2024-10-07 09:51:45.860231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.378 [2024-10-07 09:51:45.860246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.378 [2024-10-07 09:51:45.867766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f96f8 00:30:46.378 [2024-10-07 09:51:45.868839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.378 [2024-10-07 09:51:45.868856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.378 [2024-10-07 09:51:45.876196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fa7d8 00:30:46.378 [2024-10-07 09:51:45.877294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.378 [2024-10-07 09:51:45.877310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.378 [2024-10-07 09:51:45.884637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fe2e8 00:30:46.378 [2024-10-07 09:51:45.885703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.378 [2024-10-07 09:51:45.885719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.378 [2024-10-07 09:51:45.893080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198df118 00:30:46.378 [2024-10-07 09:51:45.894171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.379 [2024-10-07 09:51:45.894187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.379 [2024-10-07 09:51:45.901518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fdeb0 00:30:46.379 [2024-10-07 09:51:45.902611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.379 [2024-10-07 
09:51:45.902630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.379 [2024-10-07 09:51:45.909965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fc560 00:30:46.379 [2024-10-07 09:51:45.911055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.379 [2024-10-07 09:51:45.911071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.379 [2024-10-07 09:51:45.918407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fb480 00:30:46.379 [2024-10-07 09:51:45.919498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.379 [2024-10-07 09:51:45.919513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.379 [2024-10-07 09:51:45.926829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e3498 00:30:46.379 [2024-10-07 09:51:45.927934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.379 [2024-10-07 09:51:45.927950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.379 [2024-10-07 09:51:45.935272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f0788 00:30:46.379 [2024-10-07 09:51:45.936379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.379 [2024-10-07 09:51:45.936395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.379 [2024-10-07 09:51:45.943732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198ef6a8 00:30:46.379 [2024-10-07 09:51:45.944849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.379 [2024-10-07 09:51:45.944864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.379 [2024-10-07 09:51:45.952182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f4b08 00:30:46.379 [2024-10-07 09:51:45.953310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.379 [2024-10-07 09:51:45.953326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.379 [2024-10-07 09:51:45.960625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198ed4e8 00:30:46.379 [2024-10-07 09:51:45.961692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:46.379 [2024-10-07 09:51:45.961708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.379 [2024-10-07 09:51:45.969044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198ec408 00:30:46.379 [2024-10-07 09:51:45.970126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.379 [2024-10-07 09:51:45.970143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.379 [2024-10-07 09:51:45.977483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f1868 00:30:46.379 [2024-10-07 09:51:45.978579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.379 [2024-10-07 09:51:45.978595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.379 [2024-10-07 09:51:45.985949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198ea248 00:30:46.379 [2024-10-07 09:51:45.987061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.379 [2024-10-07 09:51:45.987078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.379 [2024-10-07 09:51:45.994390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f7970 00:30:46.379 [2024-10-07 09:51:45.995490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.379 [2024-10-07 09:51:45.995505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.379 [2024-10-07 09:51:46.002816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f8a50 00:30:46.379 [2024-10-07 09:51:46.003909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.379 [2024-10-07 09:51:46.003925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.379 [2024-10-07 09:51:46.011252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f9b30 00:30:46.379 [2024-10-07 09:51:46.012356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.379 [2024-10-07 09:51:46.012372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.379 [2024-10-07 09:51:46.019682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fac10 00:30:46.379 [2024-10-07 09:51:46.020775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4772 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000
00:30:46.379 [2024-10-07 09:51:46.020791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.379 [2024-10-07 09:51:46.028119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fd640
00:30:46.379 [2024-10-07 09:51:46.029218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.379 [2024-10-07 09:51:46.029233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.379 [2024-10-07 09:51:46.036560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198ff3c8
00:30:46.379 [2024-10-07 09:51:46.037648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.379 [2024-10-07 09:51:46.037667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.641 [2024-10-07 09:51:46.044996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fda78
00:30:46.641 [2024-10-07 09:51:46.046088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.641 [2024-10-07 09:51:46.046104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.641 [2024-10-07 09:51:46.053457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fc128
00:30:46.641 [2024-10-07 09:51:46.054562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.641 [2024-10-07 09:51:46.054578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.641 [2024-10-07 09:51:46.061886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e27f0
00:30:46.641 [2024-10-07 09:51:46.062988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.641 [2024-10-07 09:51:46.063004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.641 [2024-10-07 09:51:46.070330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f1430
00:30:46.641 [2024-10-07 09:51:46.071418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.641 [2024-10-07 09:51:46.071435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.641 [2024-10-07 09:51:46.078768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f0350
00:30:46.641 [2024-10-07 09:51:46.079818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.641 [2024-10-07 09:51:46.079834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.641 [2024-10-07 09:51:46.087207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198ef270
00:30:46.641 [2024-10-07 09:51:46.088313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.641 [2024-10-07 09:51:46.088328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.641 [2024-10-07 09:51:46.095631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198ee190
00:30:46.641 [2024-10-07 09:51:46.096681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.641 [2024-10-07 09:51:46.096697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.641 [2024-10-07 09:51:46.104045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198ed0b0
00:30:46.641 [2024-10-07 09:51:46.105142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.641 [2024-10-07 09:51:46.105157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.641 [2024-10-07 09:51:46.112449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e4140
00:30:46.642 [2024-10-07 09:51:46.113549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.113568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.120903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e95a0
00:30:46.642 [2024-10-07 09:51:46.122009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.122024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.129343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198ea680
00:30:46.642 [2024-10-07 09:51:46.130431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.130447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.137771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f8618
00:30:46.642 [2024-10-07 09:51:46.138844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.138860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.146176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f96f8
00:30:46.642 [2024-10-07 09:51:46.147261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.147277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.154588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fa7d8
00:30:46.642 [2024-10-07 09:51:46.155663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.155679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.163023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fe2e8
00:30:46.642 [2024-10-07 09:51:46.164125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.164141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.171465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198df118
00:30:46.642 [2024-10-07 09:51:46.172574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.172590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.179891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fdeb0
00:30:46.642 [2024-10-07 09:51:46.180989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.181005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.188322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fc560
00:30:46.642 [2024-10-07 09:51:46.189405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.189421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.196766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fb480
00:30:46.642 [2024-10-07 09:51:46.197877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.197893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.205180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e3498
00:30:46.642 [2024-10-07 09:51:46.206288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.206303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.213640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f0788
00:30:46.642 [2024-10-07 09:51:46.214742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.214758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.222067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198ef6a8
00:30:46.642 [2024-10-07 09:51:46.223159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.223174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.230516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f4b08
00:30:46.642 [2024-10-07 09:51:46.231606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.231624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.238940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198ed4e8
00:30:46.642 [2024-10-07 09:51:46.239990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.240006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.247351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198ec408
00:30:46.642 [2024-10-07 09:51:46.248446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.248462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.255775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f1868
00:30:46.642 [2024-10-07 09:51:46.256873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.256888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.264209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198ea248
00:30:46.642 [2024-10-07 09:51:46.265299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.265314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.272632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f7970
00:30:46.642 [2024-10-07 09:51:46.273681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.273698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.281045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f8a50
00:30:46.642 [2024-10-07 09:51:46.282147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.282162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.289490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f9b30
00:30:46.642 [2024-10-07 09:51:46.290586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.290601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.642 [2024-10-07 09:51:46.297932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fac10
00:30:46.642 [2024-10-07 09:51:46.299030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.642 [2024-10-07 09:51:46.299046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.904 [2024-10-07 09:51:46.306373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fd640
00:30:46.904 [2024-10-07 09:51:46.307331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.904 [2024-10-07 09:51:46.307346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:46.904 [2024-10-07 09:51:46.315079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eb328
00:30:46.904 [2024-10-07 09:51:46.316304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.904 [2024-10-07 09:51:46.316320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:30:46.904 [2024-10-07 09:51:46.322311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198feb58
00:30:46.904 [2024-10-07 09:51:46.323171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.904 [2024-10-07 09:51:46.323186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:46.904 [2024-10-07 09:51:46.330696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f6890
00:30:46.904 [2024-10-07 09:51:46.331544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.904 [2024-10-07 09:51:46.331562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:46.904 [2024-10-07 09:51:46.339113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198feb58
00:30:46.904 [2024-10-07 09:51:46.339947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.904 [2024-10-07 09:51:46.339962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:46.904 [2024-10-07 09:51:46.347573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f6890
00:30:46.904 [2024-10-07 09:51:46.348415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.904 [2024-10-07 09:51:46.348430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:46.904 [2024-10-07 09:51:46.356137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f35f0
00:30:46.904 [2024-10-07 09:51:46.356987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.904 [2024-10-07 09:51:46.357002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.904 [2024-10-07 09:51:46.364565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f2510
00:30:46.904 [2024-10-07 09:51:46.365434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.365450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.373008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e5a90
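The repeated data_crc32_calc_done errors above come from the host recomputing the CRC32C data digest (DDGST) over each received data PDU and comparing it against the digest carried on the wire; because this run deliberately injects digest corruption, every WRITE completes with TRANSIENT TRANSPORT ERROR (00/22) and dnr:0, i.e. a retryable transport-level failure rather than a media error. The following is a minimal standalone sketch of that check, assuming the standard iSCSI-style CRC32C parameters (reflected polynomial 0x82F63B78, seed and final XOR of all ones); it is illustrative only and not SPDK's implementation, which ships its own optimized CRC32C helpers.

/* crc32c_digest_sketch.c - hedged illustration, not SPDK source.
 * Recomputes an NVMe/TCP-style data digest over a data PDU payload and
 * compares it with the digest that arrived on the wire, which is the
 * condition data_crc32_calc_done() is reporting on when the two disagree. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli): reflected polynomial 0x82F63B78,
 * seed 0xFFFFFFFF, final XOR 0xFFFFFFFF. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[0x1000];                 /* len:0x1000, as in the log */
    memset(payload, 0xA5, sizeof(payload));  /* hypothetical WRITE data */

    uint32_t wire_ddgst = crc32c(payload, sizeof(payload));

    payload[100] ^= 0x01;                    /* flip one bit in flight, as a
                                              * digest-error injector might */
    uint32_t recomputed = crc32c(payload, sizeof(payload));

    if (recomputed != wire_ddgst) {
        /* The host-side analogue of "Data digest error on tqpair=(...)":
         * the command is then failed as a transient transport error. */
        printf("data digest error: wire=0x%08x recomputed=0x%08x\n",
               wire_ddgst, recomputed);
    }
    return 0;
}

Since dnr (do not retry) is 0 in every completion above, the initiator is free to resubmit the failed WRITEs, which is why the workload keeps making progress despite the injected corruption.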
00:30:46.905 [2024-10-07 09:51:46.373828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.373844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.381424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e01f8
00:30:46.905 [2024-10-07 09:51:46.382276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.382291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.389867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f5be8
00:30:46.905 [2024-10-07 09:51:46.390697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.390713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.398287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fe2e8
00:30:46.905 [2024-10-07 09:51:46.399145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.399160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.406714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198df118
00:30:46.905 [2024-10-07 09:51:46.407584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.407600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.415136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f6890
00:30:46.905 [2024-10-07 09:51:46.415985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.416001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.423552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fa3a0
00:30:46.905 [2024-10-07 09:51:46.424413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.424429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.431963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198ea680
00:30:46.905 [2024-10-07 09:51:46.432784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.432799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.440391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eb328
00:30:46.905 [2024-10-07 09:51:46.441263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.441278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.448838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198ee5c8
00:30:46.905 [2024-10-07 09:51:46.449653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.449669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.457257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e6fa8
00:30:46.905 [2024-10-07 09:51:46.458111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.458126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.465659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e8088
00:30:46.905 [2024-10-07 09:51:46.466505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.466521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.474059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e5220
00:30:46.905 [2024-10-07 09:51:46.474905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.474921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.482471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e1710
00:30:46.905 [2024-10-07 09:51:46.483328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.483344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.491035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f4f40
00:30:46.905 [2024-10-07 09:51:46.491906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.491921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.499476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f31b8
00:30:46.905 [2024-10-07 09:51:46.500343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.500358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.507893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f20d8
00:30:46.905 [2024-10-07 09:51:46.508737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.508752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.516304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e1b48
00:30:46.905 [2024-10-07 09:51:46.517168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.517184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.524720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e0630
00:30:46.905 [2024-10-07 09:51:46.525578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.525593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.533158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f6020
00:30:46.905 [2024-10-07 09:51:46.534026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.534041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.541584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fd640
00:30:46.905 [2024-10-07 09:51:46.542451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.542466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.550160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f7538
00:30:46.905 [2024-10-07 09:51:46.550984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.551001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:46.905 [2024-10-07 09:51:46.558587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198fb048
00:30:46.905 [2024-10-07 09:51:46.559450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:46.905 [2024-10-07 09:51:46.559466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.567008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f9f68
00:30:47.167 [2024-10-07 09:51:46.567824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.567839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.575429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaef0
00:30:47.167 [2024-10-07 09:51:46.576281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.576297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.583855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198ebfd0
00:30:47.167 [2024-10-07 09:51:46.584681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.584697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.592277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e6b70
00:30:47.167 [2024-10-07 09:51:46.593131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.593146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.600697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198e7c50
00:30:47.167 [2024-10-07 09:51:46.601885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.601900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:47.167 30056.00 IOPS, 117.41 MiB/s [2024-10-07 09:51:46.609406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f92c0
00:30:47.167 [2024-10-07 09:51:46.610367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.610382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.617145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.167 [2024-10-07 09:51:46.617953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.617969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.625727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198f0788
00:30:47.167 [2024-10-07 09:51:46.626577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.626592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.634336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198edd58
00:30:47.167 [2024-10-07 09:51:46.635159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.635175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.643264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.167 [2024-10-07 09:51:46.643569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.643585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.651938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.167 [2024-10-07 09:51:46.652266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.652282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.660610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.167 [2024-10-07 09:51:46.660982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.660998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.669352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.167 [2024-10-07 09:51:46.669625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.669641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.678055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.167 [2024-10-07 09:51:46.678356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.678371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.686772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.167 [2024-10-07 09:51:46.687093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.687109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.695441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.167 [2024-10-07 09:51:46.695796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.695811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.704135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.167 [2024-10-07 09:51:46.704424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.704440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.712860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.167 [2024-10-07 09:51:46.713134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.713149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.721508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.167 [2024-10-07 09:51:46.721803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.721819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.730192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.167 [2024-10-07 09:51:46.730478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.730493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.738895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.167 [2024-10-07 09:51:46.739159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.739182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.747563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.167 [2024-10-07 09:51:46.747863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.747878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.756219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.167 [2024-10-07 09:51:46.756500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.167 [2024-10-07 09:51:46.756514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.167 [2024-10-07 09:51:46.764960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.168 [2024-10-07 09:51:46.765201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.168 [2024-10-07 09:51:46.765216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.168 [2024-10-07 09:51:46.773641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.168 [2024-10-07 09:51:46.773881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.168 [2024-10-07 09:51:46.773899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.168 [2024-10-07 09:51:46.782334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.168 [2024-10-07 09:51:46.782621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.168 [2024-10-07 09:51:46.782637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.168 [2024-10-07 09:51:46.790997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.168 [2024-10-07 09:51:46.791286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.168 [2024-10-07 09:51:46.791302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.168 [2024-10-07 09:51:46.799672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.168 [2024-10-07 09:51:46.799955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.168 [2024-10-07 09:51:46.799970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.168 [2024-10-07 09:51:46.808347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.168 [2024-10-07 09:51:46.808656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.168 [2024-10-07 09:51:46.808672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.168 [2024-10-07 09:51:46.817018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.168 [2024-10-07 09:51:46.817277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.168 [2024-10-07 09:51:46.817293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.168 [2024-10-07 09:51:46.825698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.168 [2024-10-07 09:51:46.825996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.168 [2024-10-07 09:51:46.826012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.834380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.834678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.834695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.843019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.843336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.843352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.851739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.852012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.852028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.860426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.860740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.860756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.869245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.869564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.869580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.877983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.878258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.878273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.886657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.886921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.886937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.895339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.895611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.895629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.904012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.904305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.904321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.912708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.913015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.913030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.921365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.921657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.921676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.930072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.930404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.930420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.938796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.939096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.939112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.947514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.947815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.947831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.956253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.956557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.956572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.964931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.965239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.965255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.973685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.973983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.973999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.982373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.982660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.982676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.991144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:46.991418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:46.991434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:46.999842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:47.000132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:47.000148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:47.008549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:47.008844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:47.008860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:47.017206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:47.017554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:47.017569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.430 [2024-10-07 09:51:47.025927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.430 [2024-10-07 09:51:47.026236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.430 [2024-10-07 09:51:47.026251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.431 [2024-10-07 09:51:47.034583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.431 [2024-10-07 09:51:47.034882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.431 [2024-10-07 09:51:47.034898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.431 [2024-10-07 09:51:47.043267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.431 [2024-10-07 09:51:47.043558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.431 [2024-10-07 09:51:47.043580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.431 [2024-10-07 09:51:47.051956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.431 [2024-10-07 09:51:47.052259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.431 [2024-10-07 09:51:47.052275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.431 [2024-10-07 09:51:47.060684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.431 [2024-10-07 09:51:47.060923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.431 [2024-10-07 09:51:47.060939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.431 [2024-10-07 09:51:47.069328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.431 [2024-10-07 09:51:47.069633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.431 [2024-10-07 09:51:47.069649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.431 [2024-10-07 09:51:47.077990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.431 [2024-10-07 09:51:47.078275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.431 [2024-10-07 09:51:47.078291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.431 [2024-10-07 09:51:47.086703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.431 [2024-10-07 09:51:47.087001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.431 [2024-10-07 09:51:47.087017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.693 [2024-10-07 09:51:47.095424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.693 [2024-10-07 09:51:47.095727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.693 [2024-10-07 09:51:47.095743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.693 [2024-10-07 09:51:47.104142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.693 [2024-10-07 09:51:47.104434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.694 [2024-10-07 09:51:47.104450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.694 [2024-10-07 09:51:47.112846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.694 [2024-10-07 09:51:47.113197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.694 [2024-10-07 09:51:47.113213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.694 [2024-10-07 09:51:47.121529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.694 [2024-10-07 09:51:47.121899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.694 [2024-10-07 09:51:47.121915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.694 [2024-10-07 09:51:47.130239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.694 [2024-10-07 09:51:47.130587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.694 [2024-10-07 09:51:47.130603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.694 [2024-10-07 09:51:47.138920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.694 [2024-10-07 09:51:47.139223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.694 [2024-10-07 09:51:47.139239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.694 [2024-10-07 09:51:47.147634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.694 [2024-10-07 09:51:47.147908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.694 [2024-10-07 09:51:47.147926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.694 [2024-10-07 09:51:47.156288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.694 [2024-10-07 09:51:47.156572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.694 [2024-10-07 09:51:47.156589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.694 [2024-10-07 09:51:47.165009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.694 [2024-10-07 09:51:47.165303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.694 [2024-10-07 09:51:47.165319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.694 [2024-10-07 09:51:47.173665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.694 [2024-10-07 09:51:47.173952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.694 [2024-10-07 09:51:47.173967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.694 [2024-10-07 09:51:47.182359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.694 [2024-10-07 09:51:47.182747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.694 [2024-10-07 09:51:47.182763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.694 [2024-10-07 09:51:47.191097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.694 [2024-10-07 09:51:47.191419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.694 [2024-10-07 09:51:47.191434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.694 [2024-10-07 09:51:47.199810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.694 [2024-10-07 09:51:47.200100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.694 [2024-10-07 09:51:47.200116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.694 [2024-10-07 09:51:47.208488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20)
with pdu=0x2000198eaab8 00:30:47.694 [2024-10-07 09:51:47.208775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.694 [2024-10-07 09:51:47.208791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.694 [2024-10-07 09:51:47.217165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.694 [2024-10-07 09:51:47.217429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.694 [2024-10-07 09:51:47.217445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.694 [2024-10-07 09:51:47.225917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.694 [2024-10-07 09:51:47.226216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.694 [2024-10-07 09:51:47.226232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.694 [2024-10-07 09:51:47.234604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.694 [2024-10-07 09:51:47.234897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.694 [2024-10-07 09:51:47.234913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.694 [2024-10-07 09:51:47.243287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.694 [2024-10-07 09:51:47.243645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.694 [2024-10-07 09:51:47.243660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.694 [2024-10-07 09:51:47.251969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.694 [2024-10-07 09:51:47.252258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.694 [2024-10-07 09:51:47.252274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.694 [2024-10-07 09:51:47.260610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.694 [2024-10-07 09:51:47.260831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.694 [2024-10-07 09:51:47.260846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.694 [2024-10-07 09:51:47.269300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.694 [2024-10-07 09:51:47.269508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.694 [2024-10-07 09:51:47.269523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.694 [2024-10-07 09:51:47.278012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.694 [2024-10-07 09:51:47.278322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.694 [2024-10-07 09:51:47.278338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.694 [2024-10-07 09:51:47.286767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.694 [2024-10-07 09:51:47.287041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.694 [2024-10-07 09:51:47.287057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.694 [2024-10-07 09:51:47.295447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.694 [2024-10-07 09:51:47.295799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.694 [2024-10-07 09:51:47.295815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.694 [2024-10-07 09:51:47.304133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.694 [2024-10-07 09:51:47.304435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.694 [2024-10-07 09:51:47.304450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.694 [2024-10-07 09:51:47.312786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.694 [2024-10-07 09:51:47.313147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.694 [2024-10-07 09:51:47.313163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.694 [2024-10-07 09:51:47.321480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.694 [2024-10-07 09:51:47.321774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.694 [2024-10-07 09:51:47.321797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.694 [2024-10-07 09:51:47.330262] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.694 [2024-10-07 09:51:47.330533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.694 [2024-10-07 09:51:47.330548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.694 [2024-10-07 09:51:47.338970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.694 [2024-10-07 09:51:47.339235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.695 [2024-10-07 09:51:47.339258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.695 [2024-10-07 09:51:47.347655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.695 [2024-10-07 09:51:47.347919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.695 [2024-10-07 09:51:47.347935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.957 [2024-10-07 09:51:47.356327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.356636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.356652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.957 [2024-10-07 09:51:47.364995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.365297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.365313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.957 [2024-10-07 09:51:47.373715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.374000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.374026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.957 [2024-10-07 09:51:47.382448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.382768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.382790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:30:47.957 [2024-10-07 09:51:47.391118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.391405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.391421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.957 [2024-10-07 09:51:47.399813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.399964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.399980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.957 [2024-10-07 09:51:47.408482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.408786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.408803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.957 [2024-10-07 09:51:47.417253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.417543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.417559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.957 [2024-10-07 09:51:47.425971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.426274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.426289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.957 [2024-10-07 09:51:47.434671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.435059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.435075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.957 [2024-10-07 09:51:47.443387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.443678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.443694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.957 [2024-10-07 09:51:47.452067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.452362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.452378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.957 [2024-10-07 09:51:47.460816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.461112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.461128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.957 [2024-10-07 09:51:47.469494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.469786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.469803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.957 [2024-10-07 09:51:47.478201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.478502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.478518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.957 [2024-10-07 09:51:47.486932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.487204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.487220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.957 [2024-10-07 09:51:47.495710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.495968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.495983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.957 [2024-10-07 09:51:47.504404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.504552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.504567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.957 [2024-10-07 09:51:47.513177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.957 [2024-10-07 09:51:47.513485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.957 [2024-10-07 09:51:47.513501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.958 [2024-10-07 09:51:47.521876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.958 [2024-10-07 09:51:47.522172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.958 [2024-10-07 09:51:47.522188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.958 [2024-10-07 09:51:47.530578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.958 [2024-10-07 09:51:47.530948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.958 [2024-10-07 09:51:47.530964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.958 [2024-10-07 09:51:47.539288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.958 [2024-10-07 09:51:47.539655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.958 [2024-10-07 09:51:47.539671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.958 [2024-10-07 09:51:47.547946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.958 [2024-10-07 09:51:47.548243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.958 [2024-10-07 09:51:47.548259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.958 [2024-10-07 09:51:47.556586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.958 [2024-10-07 09:51:47.556845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.958 [2024-10-07 09:51:47.556861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.958 [2024-10-07 09:51:47.565324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8 00:30:47.958 [2024-10-07 09:51:47.565608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.958 [2024-10-07 09:51:47.565629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.958 [2024-10-07 09:51:47.573964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.958 [2024-10-07 09:51:47.574247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.958 [2024-10-07 09:51:47.574264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.958 [2024-10-07 09:51:47.582653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.958 [2024-10-07 09:51:47.582947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.958 [2024-10-07 09:51:47.582963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.958 [2024-10-07 09:51:47.591317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.958 [2024-10-07 09:51:47.591624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.958 [2024-10-07 09:51:47.591640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.958 [2024-10-07 09:51:47.600049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3c20) with pdu=0x2000198eaab8
00:30:47.958 [2024-10-07 09:51:47.600337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:47.958 [2024-10-07 09:51:47.600359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.958 29747.00 IOPS, 116.20 MiB/s
00:30:47.958 Latency(us)
00:30:47.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:47.958 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:47.958 nvme0n1 : 2.00 29739.89 116.17 0.00 0.00 4297.08 2129.92 12014.93
00:30:47.958 ===================================================================================================================
00:30:47.958 Total : 29739.89 116.17 0.00 0.00 4297.08 2129.92 12014.93
00:30:47.958 {
00:30:47.958 "results": [
00:30:47.958 {
00:30:47.958 "job": "nvme0n1",
00:30:47.958 "core_mask": "0x2",
00:30:47.958 "workload": "randwrite",
00:30:47.958 "status": "finished",
00:30:47.958 "queue_depth": 128,
00:30:47.958 "io_size": 4096,
00:30:47.958 "runtime": 2.004244,
00:30:47.958 "iops": 29739.89194928362,
00:30:47.958 "mibps": 116.17145292688915,
00:30:47.958 "io_failed": 0,
00:30:47.958 "io_timeout": 0,
00:30:47.958 "avg_latency_us": 4297.084880493016,
00:30:47.958 "min_latency_us": 2129.92,
00:30:47.958 "max_latency_us": 12014.933333333332
00:30:47.958 }
00:30:47.958 ],
00:30:47.958 "core_count": 1
00:30:47.958 }
00:30:48.219 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:48.219 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:48.219 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:48.219 | .driver_specific
00:30:48.219 | .nvme_error
00:30:48.219 | .status_code
00:30:48.219 | .command_transient_transport_error'
00:30:48.219 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:48.219 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 233 > 0 ))
00:30:48.219 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3539106
00:30:48.219 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' -z 3539106 ']'
00:30:48.219 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # kill -0 3539106
00:30:48.219 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # uname
00:30:48.219 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:30:48.219 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3539106
00:30:48.219 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # process_name=reactor_1
00:30:48.219 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']'
00:30:48.219 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3539106'
00:30:48.219 killing process with pid 3539106
00:30:48.219 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # kill 3539106
00:30:48.219 Received shutdown signal, test time was about 2.000000 seconds
00:30:48.219
00:30:48.219 Latency(us)
00:30:48.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:48.219 ===================================================================================================================
00:30:48.219 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:48.219 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@977 -- # wait 3539106
00:30:48.480 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:30:48.480 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:48.480 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:30:48.480 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:48.480 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:48.480 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3539826
00:30:48.480 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3539826 /var/tmp/bperf.sock
00:30:48.480 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # '[' -z 3539826 ']'
00:30:48.480 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:30:48.480 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:48.480 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local max_retries=100
00:30:48.480 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:48.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:48.480 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@843 -- # xtrace_disable
00:30:48.480 09:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:48.480 [2024-10-07 09:51:48.042195] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization...
00:30:48.480 [2024-10-07 09:51:48.042252] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3539826 ]
00:30:48.480 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:48.480 Zero copy mechanism will not be used.
00:30:48.480 [2024-10-07 09:51:48.117242] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:48.739 [2024-10-07 09:51:48.170466] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:30:49.309 09:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # (( i == 0 ))
00:30:49.309 09:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@867 -- # return 0
00:30:49.309 09:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:49.309 09:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:49.569 09:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:49.569 09:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@564 -- # xtrace_disable
00:30:49.569 09:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:49.569 09:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:30:49.569 09:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:49.569 09:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:49.831 nvme0n1
00:30:49.831 09:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:49.831 09:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@564 -- # xtrace_disable
00:30:49.831 09:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:49.831 09:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:30:49.831 09:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:49.831 09:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:49.831 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:49.831 Zero copy mechanism will not be used.
00:30:49.831 Running I/O for 2 seconds...
00:30:49.831 [2024-10-07 09:51:49.399346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90
00:30:49.831 [2024-10-07 09:51:49.399561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.831 [2024-10-07 09:51:49.399588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:49.831 [2024-10-07 09:51:49.402710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90
00:30:49.831 [2024-10-07 09:51:49.402908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.831 [2024-10-07 09:51:49.402928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:49.831 [2024-10-07 09:51:49.406021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90
00:30:49.831 [2024-10-07 09:51:49.406216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.831 [2024-10-07 09:51:49.406233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:49.831 [2024-10-07 09:51:49.409356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90
00:30:49.831 [2024-10-07 09:51:49.409548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.831 [2024-10-07 09:51:49.409565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:49.831 [2024-10-07 09:51:49.412712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90
00:30:49.831 [2024-10-07 09:51:49.413017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.831 [2024-10-07 09:51:49.413035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:49.831 [2024-10-07 09:51:49.416740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest
error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.831 [2024-10-07 09:51:49.416938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.831 [2024-10-07 09:51:49.416955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.831 [2024-10-07 09:51:49.420318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.831 [2024-10-07 09:51:49.420510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.831 [2024-10-07 09:51:49.420527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.831 [2024-10-07 09:51:49.423791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.831 [2024-10-07 09:51:49.423991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.831 [2024-10-07 09:51:49.424008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.831 [2024-10-07 09:51:49.426992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.831 [2024-10-07 09:51:49.427184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.831 [2024-10-07 09:51:49.427201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.831 [2024-10-07 09:51:49.430155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.831 [2024-10-07 09:51:49.430346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.831 [2024-10-07 09:51:49.430363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.831 [2024-10-07 09:51:49.433418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.831 [2024-10-07 09:51:49.433608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.831 [2024-10-07 09:51:49.433631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.831 [2024-10-07 09:51:49.439577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.831 [2024-10-07 09:51:49.439889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.831 [2024-10-07 09:51:49.439907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.831 [2024-10-07 09:51:49.443745] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.831 [2024-10-07 09:51:49.443938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.831 [2024-10-07 09:51:49.443955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.831 [2024-10-07 09:51:49.447086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.831 [2024-10-07 09:51:49.447408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.831 [2024-10-07 09:51:49.447426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.831 [2024-10-07 09:51:49.454191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.831 [2024-10-07 09:51:49.454384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.831 [2024-10-07 09:51:49.454401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.831 [2024-10-07 09:51:49.457743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.831 [2024-10-07 09:51:49.457937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.831 [2024-10-07 09:51:49.457958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.831 [2024-10-07 09:51:49.461571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.831 [2024-10-07 09:51:49.461778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.831 [2024-10-07 09:51:49.461795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.831 [2024-10-07 09:51:49.465249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.831 [2024-10-07 09:51:49.465441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.831 [2024-10-07 09:51:49.465458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.831 [2024-10-07 09:51:49.468524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.831 [2024-10-07 09:51:49.468722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.831 [2024-10-07 09:51:49.468739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:30:49.831 [2024-10-07 09:51:49.471798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.831 [2024-10-07 09:51:49.471991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.831 [2024-10-07 09:51:49.472008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.831 [2024-10-07 09:51:49.475206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.831 [2024-10-07 09:51:49.475397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.831 [2024-10-07 09:51:49.475414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.831 [2024-10-07 09:51:49.478490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.831 [2024-10-07 09:51:49.478687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.832 [2024-10-07 09:51:49.478703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.832 [2024-10-07 09:51:49.481748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.832 [2024-10-07 09:51:49.481939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.832 [2024-10-07 09:51:49.481955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.832 [2024-10-07 09:51:49.485013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.832 [2024-10-07 09:51:49.485205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.832 [2024-10-07 09:51:49.485221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.832 [2024-10-07 09:51:49.488234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.832 [2024-10-07 09:51:49.488429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.832 [2024-10-07 09:51:49.488446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.832 [2024-10-07 09:51:49.491795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:49.832 [2024-10-07 09:51:49.491989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.832 [2024-10-07 09:51:49.492006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:50.094 [2024-10-07 09:51:49.495067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90
00:30:50.094 [2024-10-07 09:51:49.495259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:50.094 [2024-10-07 09:51:49.495275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:50.094 [2024-10-07 09:51:49.498571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90
00:30:50.094 [2024-10-07 09:51:49.498768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:50.094 [2024-10-07 09:51:49.498784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... further records of the same three-line pattern, spanning 09:51:49.501 through 09:51:50.138, omitted: each WRITE on sqid:1 cid:15 (len:32, lba varying) hits a data digest error on tqpair=(0x10b3f60) and is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:30:50.625 [2024-10-07 09:51:50.142043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90
00:30:50.625 [2024-10-07 09:51:50.142232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:50.625 [2024-10-07 09:51:50.142249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:50.625 [2024-10-07 09:51:50.145793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.146111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.146128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.149264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.149455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.149472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.153194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.153241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.153256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.157575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.157876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.157893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.164512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.164738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.164754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.169775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.169969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.169986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.175096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.175425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.175441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.183692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.183972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.183989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.188250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.188442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.188461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.191633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.191824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.191840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.194936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.195265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.195282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.198556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.198755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.198771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.202744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.202949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.202966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.206609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.206685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.206700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.213843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.214036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.214052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.218873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.219192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.219210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.222232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.222424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.222440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.225471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.225671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.225687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.231103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.231418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.231434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.238189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.238451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.238468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.248332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.248601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.248630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.256953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.257175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.257191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.265370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.265717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.265734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.269744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.269919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.269935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.273169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.273340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.273357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.277264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.625 [2024-10-07 09:51:50.277452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.625 [2024-10-07 09:51:50.277469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.625 [2024-10-07 09:51:50.281912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.626 [2024-10-07 09:51:50.282081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.626 [2024-10-07 09:51:50.282097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.888 [2024-10-07 09:51:50.285967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.888 [2024-10-07 09:51:50.286284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 
[2024-10-07 09:51:50.286301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.888 [2024-10-07 09:51:50.289717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.888 [2024-10-07 09:51:50.289880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-07 09:51:50.289896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.888 [2024-10-07 09:51:50.293371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.888 [2024-10-07 09:51:50.293716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.888 [2024-10-07 09:51:50.293733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.297720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.297883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.297899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.301500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.301668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.301685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.305516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.305586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.305601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.309652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.309697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.309712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.314529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.314587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.314605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.321867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.321921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.321936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.325824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.325867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.325882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.330878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.331141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.331157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.338565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.338631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.338647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.342402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.342457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.342472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.345488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.345537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.345552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.349181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.349242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.349258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.353610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.353858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.353873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.363717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.363985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.364000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.372159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.372259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.372274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.379960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.380252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.380268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.384170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.384224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.384239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.387165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.387222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.387237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.390182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.390235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.390250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.393225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.393282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.393297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.889 6874.00 IOPS, 859.25 MiB/s [2024-10-07 09:51:50.397236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.397289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.397305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.400270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.400400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.400415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.404398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.404457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.404472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.407411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.407473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.407488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.410400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.410451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.410466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.889 [2024-10-07 09:51:50.413763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with 
pdu=0x2000198fef90 00:30:50.889 [2024-10-07 09:51:50.413814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.889 [2024-10-07 09:51:50.413829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.417713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.417766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.417781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.420792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.420839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.420855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.423812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.423878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.423894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.427936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.427981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.427997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.431495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.431545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.431563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.435133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.435185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.435200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.440970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.441034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.441050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.445201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.445269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.445284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.452711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.453005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.453022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.461932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.462207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.462224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.471231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.471554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.471570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.479326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.479636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.479651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.488398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.488685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.488708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.497565] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.497648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.497663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.506680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.506788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.506802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.515854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.516147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.516163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.525790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.526018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.526033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.533364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.533432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.533448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.538379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.538445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.538460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.890 [2024-10-07 09:51:50.543983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:50.890 [2024-10-07 09:51:50.544155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.890 [2024-10-07 09:51:50.544170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:30:51.152 [2024-10-07 09:51:50.550037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.152 [2024-10-07 09:51:50.550085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.152 [2024-10-07 09:51:50.550100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.152 [2024-10-07 09:51:50.557703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.152 [2024-10-07 09:51:50.557802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.152 [2024-10-07 09:51:50.557820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.152 [2024-10-07 09:51:50.565430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.152 [2024-10-07 09:51:50.565492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.152 [2024-10-07 09:51:50.565507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.152 [2024-10-07 09:51:50.570843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.152 [2024-10-07 09:51:50.570910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.152 [2024-10-07 09:51:50.570926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:51.152 [2024-10-07 09:51:50.576599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.152 [2024-10-07 09:51:50.576868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.152 [2024-10-07 09:51:50.576883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.152 [2024-10-07 09:51:50.581708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.152 [2024-10-07 09:51:50.581754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.152 [2024-10-07 09:51:50.581770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.152 [2024-10-07 09:51:50.586134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.152 [2024-10-07 09:51:50.586195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.152 [2024-10-07 09:51:50.586210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.152 [2024-10-07 09:51:50.593998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.152 [2024-10-07 09:51:50.594058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.152 [2024-10-07 09:51:50.594072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:51.152 [2024-10-07 09:51:50.597491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.152 [2024-10-07 09:51:50.597549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.152 [2024-10-07 09:51:50.597564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.152 [2024-10-07 09:51:50.601173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.152 [2024-10-07 09:51:50.601228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.152 [2024-10-07 09:51:50.601244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.152 [2024-10-07 09:51:50.604748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.152 [2024-10-07 09:51:50.604808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.152 [2024-10-07 09:51:50.604823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.152 [2024-10-07 09:51:50.607990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.152 [2024-10-07 09:51:50.608042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.152 [2024-10-07 09:51:50.608057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:51.152 [2024-10-07 09:51:50.611329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.152 [2024-10-07 09:51:50.611385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.152 [2024-10-07 09:51:50.611400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.152 [2024-10-07 09:51:50.614825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.152 [2024-10-07 09:51:50.614886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.152 [2024-10-07 09:51:50.614901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:51.152 [2024-10-07 09:51:50.618365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90
00:30:51.152 [2024-10-07 09:51:50.618418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:51.152 [2024-10-07 09:51:50.618433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line pattern repeats from 09:51:50.618 through 09:51:51.318 on tqpair=(0x10b3f60) with pdu=0x2000198fef90, differing only in timestamp, lba, and sqhd: each injected data digest error is logged by tcp.c:2233:data_crc32_calc_done, and the affected WRITE (sqid:1 cid:15 nsid:1, len:32) completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:30:51.683 [2024-10-07 09:51:51.318049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90
00:30:51.683 [2024-10-07 09:51:51.318303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:51.683 [2024-10-07 09:51:51.318320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:51.683 [2024-10-07 09:51:51.321959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.683 [2024-10-07 09:51:51.322012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.683 [2024-10-07 09:51:51.322027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.683 [2024-10-07 09:51:51.325423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.683 [2024-10-07 09:51:51.325514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.683 [2024-10-07 09:51:51.325529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.683 [2024-10-07 09:51:51.329468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.683 [2024-10-07 09:51:51.329765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.683 [2024-10-07 09:51:51.329781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.683 [2024-10-07 09:51:51.334018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.683 [2024-10-07 09:51:51.334074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.683 [2024-10-07 09:51:51.334089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:51.683 [2024-10-07 09:51:51.337639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.683 [2024-10-07 09:51:51.337693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.683 [2024-10-07 09:51:51.337709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.683 [2024-10-07 09:51:51.341381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.944 [2024-10-07 09:51:51.341605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.944 [2024-10-07 09:51:51.341627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.944 [2024-10-07 09:51:51.344921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.944 [2024-10-07 09:51:51.344974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.944 [2024-10-07 09:51:51.344989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.944 [2024-10-07 09:51:51.349780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.944 [2024-10-07 09:51:51.349861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.944 [2024-10-07 09:51:51.349877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:51.944 [2024-10-07 09:51:51.353323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.944 [2024-10-07 09:51:51.353393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.944 [2024-10-07 09:51:51.353408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.944 [2024-10-07 09:51:51.361123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.944 [2024-10-07 09:51:51.361198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.944 [2024-10-07 09:51:51.361213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.944 [2024-10-07 09:51:51.364644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.944 [2024-10-07 09:51:51.364749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.944 [2024-10-07 09:51:51.364765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.944 [2024-10-07 09:51:51.368173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.944 [2024-10-07 09:51:51.368226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.944 [2024-10-07 09:51:51.368241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:51.944 [2024-10-07 09:51:51.371548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.944 [2024-10-07 09:51:51.371645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.944 [2024-10-07 09:51:51.371660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.944 [2024-10-07 09:51:51.377515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.944 [2024-10-07 09:51:51.377578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.944 [2024-10-07 09:51:51.377596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.944 [2024-10-07 09:51:51.381506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.944 [2024-10-07 09:51:51.381754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.944 [2024-10-07 09:51:51.381769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.944 [2024-10-07 09:51:51.386215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.944 [2024-10-07 09:51:51.386309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.944 [2024-10-07 09:51:51.386324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:51.944 [2024-10-07 09:51:51.392039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.944 [2024-10-07 09:51:51.392125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.944 [2024-10-07 09:51:51.392140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.944 [2024-10-07 09:51:51.395689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.944 [2024-10-07 09:51:51.395743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.944 [2024-10-07 09:51:51.395758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.944 6473.50 IOPS, 809.19 MiB/s [2024-10-07 09:51:51.401953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b3f60) with pdu=0x2000198fef90 00:30:51.944 [2024-10-07 09:51:51.402212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.944 [2024-10-07 09:51:51.402227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.944 00:30:51.944 Latency(us) 00:30:51.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.944 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:51.944 nvme0n1 : 2.01 6462.09 807.76 0.00 0.00 2470.69 1310.72 10977.28 00:30:51.944 =================================================================================================================== 00:30:51.944 Total : 6462.09 807.76 0.00 0.00 2470.69 1310.72 10977.28 00:30:51.944 { 00:30:51.944 "results": [ 00:30:51.944 { 00:30:51.944 "job": "nvme0n1", 00:30:51.944 "core_mask": "0x2", 00:30:51.944 "workload": "randwrite", 00:30:51.944 "status": "finished", 00:30:51.944 "queue_depth": 16, 00:30:51.944 "io_size": 131072, 00:30:51.944 "runtime": 2.006471, 00:30:51.945 "iops": 6462.091901652204, 00:30:51.945 "mibps": 807.7614877065255, 
00:30:51.945 "io_failed": 0,
00:30:51.945 "io_timeout": 0,
00:30:51.945 "avg_latency_us": 2470.691552264898,
00:30:51.945 "min_latency_us": 1310.72,
00:30:51.945 "max_latency_us": 10977.28
00:30:51.945 }
00:30:51.945 ],
00:30:51.945 "core_count": 1
00:30:51.945 }
00:30:51.945 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:51.945 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:51.945 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:51.945 | .driver_specific
00:30:51.945 | .nvme_error
00:30:51.945 | .status_code
00:30:51.945 | .command_transient_transport_error'
00:30:51.945 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:52.205 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 418 > 0 ))
00:30:52.205 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3539826
00:30:52.205 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' -z 3539826 ']'
00:30:52.205 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # kill -0 3539826
00:30:52.205 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # uname
00:30:52.205 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:30:52.205 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3539826
00:30:52.205 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # process_name=reactor_1
00:30:52.205 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']'
00:30:52.205 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3539826'
00:30:52.205 killing process with pid 3539826
00:30:52.205 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # kill 3539826
00:30:52.205 Received shutdown signal, test time was about 2.000000 seconds
00:30:52.205
00:30:52.205 Latency(us)
00:30:52.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:52.205 ===================================================================================================================
00:30:52.205 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:52.205 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@977 -- # wait 3539826
00:30:52.205 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3537376
00:30:52.205 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' -z 3537376 ']'
00:30:52.205 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # kill -0 3537376
00:30:52.205 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # uname
00:30:52.205 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' Linux =
Linux ']' 00:30:52.205 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3537376 00:30:52.467 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:30:52.467 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:30:52.467 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3537376' 00:30:52.467 killing process with pid 3537376 00:30:52.467 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # kill 3537376 00:30:52.467 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@977 -- # wait 3537376 00:30:52.467 00:30:52.467 real 0m16.577s 00:30:52.467 user 0m32.794s 00:30:52.467 sys 0m3.618s 00:30:52.467 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # xtrace_disable 00:30:52.467 09:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:52.467 ************************************ 00:30:52.467 END TEST nvmf_digest_error 00:30:52.467 ************************************ 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:52.467 rmmod nvme_tcp 00:30:52.467 rmmod nvme_fabrics 00:30:52.467 rmmod nvme_keyring 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 3537376 ']' 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 3537376 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@953 -- # '[' -z 3537376 ']' 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@957 -- # kill -0 3537376 00:30:52.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 957: kill: (3537376) - No such process 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@980 -- # echo 'Process with pid 3537376 is not found' 00:30:52.467 Process with pid 3537376 is not found 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:52.467 09:51:52 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.467 09:51:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.009 09:51:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:55.009 00:30:55.009 real 0m43.832s 00:30:55.009 user 1m8.613s 00:30:55.009 sys 0m13.354s 00:30:55.009 09:51:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # xtrace_disable 00:30:55.009 09:51:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:55.009 ************************************ 00:30:55.009 END TEST nvmf_digest 00:30:55.009 ************************************ 00:30:55.009 09:51:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:30:55.009 09:51:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:30:55.009 09:51:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:30:55.009 09:51:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:55.009 09:51:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:30:55.009 09:51:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1110 -- # xtrace_disable 00:30:55.009 09:51:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.009 ************************************ 00:30:55.009 START TEST nvmf_bdevperf 00:30:55.009 ************************************ 00:30:55.009 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:55.009 * Looking for test storage... 
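The transient-error assertion traced above (host/digest.sh@71) reduces to one RPC call and a jq projection over bdev iostat. A minimal sketch of that check, reconstructed from the trace; the function body is an approximation of digest.sh, not its verbatim source:

get_transient_errcount() {
    local bdev=$1
    # bperf_rpc points rpc.py at the bdevperf RPC socket, not the target's
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# The digest_error test passes when the counter is non-zero; the run above saw 418:
(( $(get_transient_errcount nvme0n1) > 0 ))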
00:30:55.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1626 -- # lcov --version 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:30:55.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.010 --rc genhtml_branch_coverage=1 00:30:55.010 --rc genhtml_function_coverage=1 00:30:55.010 --rc genhtml_legend=1 00:30:55.010 --rc geninfo_all_blocks=1 00:30:55.010 --rc geninfo_unexecuted_blocks=1 00:30:55.010 00:30:55.010 ' 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:30:55.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.010 --rc genhtml_branch_coverage=1 00:30:55.010 --rc genhtml_function_coverage=1 00:30:55.010 --rc genhtml_legend=1 00:30:55.010 --rc geninfo_all_blocks=1 00:30:55.010 --rc geninfo_unexecuted_blocks=1 00:30:55.010 00:30:55.010 ' 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:30:55.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.010 --rc genhtml_branch_coverage=1 00:30:55.010 --rc genhtml_function_coverage=1 00:30:55.010 --rc genhtml_legend=1 00:30:55.010 --rc geninfo_all_blocks=1 00:30:55.010 --rc geninfo_unexecuted_blocks=1 00:30:55.010 00:30:55.010 ' 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:30:55.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.010 --rc genhtml_branch_coverage=1 00:30:55.010 --rc genhtml_function_coverage=1 00:30:55.010 --rc genhtml_legend=1 00:30:55.010 --rc geninfo_all_blocks=1 00:30:55.010 --rc geninfo_unexecuted_blocks=1 00:30:55.010 00:30:55.010 ' 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:30:55.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:55.010 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:55.011 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:55.011 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:55.011 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:55.011 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:55.011 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:55.011 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:55.011 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:55.011 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:55.011 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.011 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:55.011 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.011 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:55.011 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:55.011 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:55.011 09:51:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:03.150 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:03.150 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:03.150 Found net devices under 0000:31:00.0: cvl_0_0 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:03.150 Found net devices under 0000:31:00.1: cvl_0_1 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:03.150 09:52:01 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:03.150 09:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:03.150 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:03.150 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:03.150 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:03.150 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:03.150 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:03.150 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:03.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:03.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:31:03.151 00:31:03.151 --- 10.0.0.2 ping statistics --- 00:31:03.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:03.151 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:03.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:03.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:31:03.151 00:31:03.151 --- 10.0.0.1 ping statistics --- 00:31:03.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:03.151 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=3544885 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 3544885 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # '[' -z 3544885 ']' 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local max_retries=100 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:03.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@843 -- # xtrace_disable 00:31:03.151 09:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:03.151 [2024-10-07 09:52:02.270700] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
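The nvmf_tcp_init entries above move one E810 port (cvl_0_0) into a network namespace as the target side, leave its peer (cvl_0_1) in the root namespace as the initiator, and open the NVMe/TCP port in the firewall before pinging across. A condensed sketch of those commands, assuming the same device names and 10.0.0.0/24 addressing seen in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP
ping -c 1 10.0.0.2                                                   # reachability check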
00:31:03.151 [2024-10-07 09:52:02.270768] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:03.151 [2024-10-07 09:52:02.358997] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:03.151 [2024-10-07 09:52:02.454931] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:03.151 [2024-10-07 09:52:02.454981] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:03.151 [2024-10-07 09:52:02.454989] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:03.151 [2024-10-07 09:52:02.454997] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:03.151 [2024-10-07 09:52:02.455003] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:03.151 [2024-10-07 09:52:02.456194] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:31:03.151 [2024-10-07 09:52:02.456358] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.151 [2024-10-07 09:52:02.456359] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@867 -- # return 0 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@733 -- # xtrace_disable 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:03.724 [2024-10-07 09:52:03.150951] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:03.724 Malloc0 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
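With the target up, tgt_init drives plain rpc.py calls; the add_ns and add_listener steps appear in the entries that follow. A consolidated sketch of the bring-up, assuming the target's default /var/tmp/spdk.sock RPC socket:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192   # TCP transport, options as the harness passes them
$rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB ram bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420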
00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:03.724 [2024-10-07 09:52:03.224207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:03.724 { 00:31:03.724 "params": { 00:31:03.724 "name": "Nvme$subsystem", 00:31:03.724 "trtype": "$TEST_TRANSPORT", 00:31:03.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:03.724 "adrfam": "ipv4", 00:31:03.724 "trsvcid": "$NVMF_PORT", 00:31:03.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:03.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:03.724 "hdgst": ${hdgst:-false}, 00:31:03.724 "ddgst": ${ddgst:-false} 00:31:03.724 }, 00:31:03.724 "method": "bdev_nvme_attach_controller" 00:31:03.724 } 00:31:03.724 EOF 00:31:03.724 )") 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:31:03.724 09:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:03.725 "params": { 00:31:03.725 "name": "Nvme1", 00:31:03.725 "trtype": "tcp", 00:31:03.725 "traddr": "10.0.0.2", 00:31:03.725 "adrfam": "ipv4", 00:31:03.725 "trsvcid": "4420", 00:31:03.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:03.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:03.725 "hdgst": false, 00:31:03.725 "ddgst": false 00:31:03.725 }, 00:31:03.725 "method": "bdev_nvme_attach_controller" 00:31:03.725 }' 00:31:03.725 [2024-10-07 09:52:03.291571] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
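gen_nvmf_target_json (nvmf/common.sh) prints the bdev_nvme_attach_controller configuration traced above, and the harness hands it to bdevperf through bash process substitution, which is why --json points at /dev/fd/62. The same pattern in isolation, assuming nvmf/common.sh is sourced so the generator is available:

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# 128 outstanding I/Os, 4 KiB I/O size, verify workload, 1-second run
$BDEVPERF --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1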
00:31:03.725 [2024-10-07 09:52:03.291646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3544947 ] 00:31:03.725 [2024-10-07 09:52:03.377590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.986 [2024-10-07 09:52:03.474936] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.247 Running I/O for 1 seconds... 00:31:05.192 8517.00 IOPS, 33.27 MiB/s 00:31:05.192 Latency(us) 00:31:05.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.192 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:05.192 Verification LBA range: start 0x0 length 0x4000 00:31:05.192 Nvme1n1 : 1.01 8563.59 33.45 0.00 0.00 14886.59 2730.67 14636.37 00:31:05.192 =================================================================================================================== 00:31:05.192 Total : 8563.59 33.45 0.00 0.00 14886.59 2730.67 14636.37 00:31:05.192 09:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3545285 00:31:05.192 09:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:31:05.192 09:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:31:05.192 09:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:31:05.192 09:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:31:05.192 09:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:31:05.192 09:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:05.192 09:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:05.192 { 00:31:05.192 "params": { 00:31:05.192 "name": "Nvme$subsystem", 00:31:05.192 "trtype": "$TEST_TRANSPORT", 00:31:05.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.192 "adrfam": "ipv4", 00:31:05.192 "trsvcid": "$NVMF_PORT", 00:31:05.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.192 "hdgst": ${hdgst:-false}, 00:31:05.192 "ddgst": ${ddgst:-false} 00:31:05.192 }, 00:31:05.192 "method": "bdev_nvme_attach_controller" 00:31:05.192 } 00:31:05.192 EOF 00:31:05.192 )") 00:31:05.192 09:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:31:05.192 09:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:31:05.192 09:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:31:05.192 09:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:05.192 "params": { 00:31:05.192 "name": "Nvme1", 00:31:05.192 "trtype": "tcp", 00:31:05.192 "traddr": "10.0.0.2", 00:31:05.192 "adrfam": "ipv4", 00:31:05.192 "trsvcid": "4420", 00:31:05.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:05.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:05.192 "hdgst": false, 00:31:05.192 "ddgst": false 00:31:05.192 }, 00:31:05.192 "method": "bdev_nvme_attach_controller" 00:31:05.192 }' 00:31:05.453 [2024-10-07 09:52:04.882054] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
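In the results table above, the MiB/s column is just IOPS scaled by the 4096-byte I/O size bdevperf was given (-o 4096): 8563.59 * 4096 / 2^20 = 33.45 MiB/s. A one-line check:

  awk 'BEGIN { printf "%.2f MiB/s\n", 8563.59 * 4096 / (1024 * 1024) }'   # prints 33.45 MiB/s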
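Both bdevperf runs receive their configuration as --json /dev/fd/62 (or /dev/fd/63): the heredoc assembled by gen_nvmf_target_json is filtered through jq and handed to bdevperf via bash process substitution, so the bdev_nvme_attach_controller config printed above never touches disk. The shape of the call, assuming the harness's gen_nvmf_target_json function is in scope:

  # <(...) expands to a /dev/fd/NN path that bdevperf opens like a regular file
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1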
00:31:05.453 [2024-10-07 09:52:04.882107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3545285 ] 00:31:05.453 [2024-10-07 09:52:04.959866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.453 [2024-10-07 09:52:05.023202] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.714 Running I/O for 15 seconds... 00:31:08.299 11857.00 IOPS, 46.32 MiB/s 11743.00 IOPS, 45.87 MiB/s 09:52:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3544885 00:31:08.299 09:52:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:31:08.299 [2024-10-07 09:52:07.838425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:105224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.299 [2024-10-07 09:52:07.838467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:105312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:105328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:105344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:105368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:105376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:105384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:105392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:105400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:105416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:105424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:08.299 [2024-10-07 09:52:07.838933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:105440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:105448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.838985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.838992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.299 [2024-10-07 09:52:07.839002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.299 [2024-10-07 09:52:07.839009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:105472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:105496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:105504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 
09:52:07.839104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:105536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:105544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:105552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:105560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:105568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:105584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:105600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:105616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839445] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.300 [2024-10-07 09:52:07.839537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.300 [2024-10-07 09:52:07.839555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.300 [2024-10-07 09:52:07.839572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.300 [2024-10-07 09:52:07.839589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.300 [2024-10-07 09:52:07.839606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:105272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.300 [2024-10-07 09:52:07.839627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:105280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.300 [2024-10-07 09:52:07.839644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:105288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.300 [2024-10-07 09:52:07.839661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.300 [2024-10-07 09:52:07.839677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.300 [2024-10-07 09:52:07.839687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.839694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.839703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:105728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.839712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.839721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.839729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.839738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.839746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.839755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.839763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.839774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.839782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.839791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105768 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.839799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.839808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:105776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.839815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.839825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.839833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.839842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.839849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.839858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.839866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.839875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.839883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.839892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:105816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.839900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.839909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.839917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.839926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.839933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.839943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.839950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.839960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 
[2024-10-07 09:52:07.839967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.839977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.839984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.839995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:105896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:105928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:105944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:106024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.301 [2024-10-07 09:52:07.840356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:106032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.301 [2024-10-07 09:52:07.840363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:106040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:106056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:106096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:106104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:106144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:106160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:106168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 
[2024-10-07 09:52:07.840664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:106176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:106192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:106208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:106216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:106224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.302 [2024-10-07 09:52:07.840787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.302 [2024-10-07 09:52:07.840796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204cc30 is same with the state(6) to be set 00:31:08.302 [2024-10-07 09:52:07.840806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.302 [2024-10-07 09:52:07.840812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.302 [2024-10-07 09:52:07.840818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106240 len:8 PRP1 0x0 PRP2 0x0 00:31:08.302 [2024-10-07 09:52:07.840827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:08.302 [2024-10-07 09:52:07.840866] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x204cc30 was disconnected and freed. reset controller. 00:31:08.302 [2024-10-07 09:52:07.844352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.302 [2024-10-07 09:52:07.844403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:08.302 [2024-10-07 09:52:07.845191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.302 [2024-10-07 09:52:07.845209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:08.302 [2024-10-07 09:52:07.845217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:08.302 [2024-10-07 09:52:07.845434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:08.302 [2024-10-07 09:52:07.845657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.302 [2024-10-07 09:52:07.845665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.302 [2024-10-07 09:52:07.845674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.302 [2024-10-07 09:52:07.849157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.302 [2024-10-07 09:52:07.858392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.302 [2024-10-07 09:52:07.859081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.302 [2024-10-07 09:52:07.859121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:08.302 [2024-10-07 09:52:07.859132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:08.302 [2024-10-07 09:52:07.859370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:08.302 [2024-10-07 09:52:07.859589] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.302 [2024-10-07 09:52:07.859598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.302 [2024-10-07 09:52:07.859607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.302 [2024-10-07 09:52:07.863316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
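From here the host enters a reconnect loop: each posix_sock_create connect() fails with errno = 111 because the process killed above (kill -9 3544885) took the 10.0.0.2:4420 listener down with it. On Linux, errno 111 is ECONNREFUSED, which a quick lookup confirms (python3 used here only as a convenient errno table):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # ECONNREFUSED - Connection refused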
00:31:08.302 [2024-10-07 09:52:07.872155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.302 [2024-10-07 09:52:07.872721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.302 [2024-10-07 09:52:07.872761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:08.302 [2024-10-07 09:52:07.872774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:08.302 [2024-10-07 09:52:07.873013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:08.302 [2024-10-07 09:52:07.873233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.302 [2024-10-07 09:52:07.873242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.302 [2024-10-07 09:52:07.873250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.303 [2024-10-07 09:52:07.876744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.303 [2024-10-07 09:52:07.885976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.303 [2024-10-07 09:52:07.886654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.303 [2024-10-07 09:52:07.886694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:08.303 [2024-10-07 09:52:07.886706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:08.303 [2024-10-07 09:52:07.886948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:08.303 [2024-10-07 09:52:07.887167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.303 [2024-10-07 09:52:07.887176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.303 [2024-10-07 09:52:07.887184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.303 [2024-10-07 09:52:07.890688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.303 [2024-10-07 09:52:07.899722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.303 [2024-10-07 09:52:07.900389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.303 [2024-10-07 09:52:07.900432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.303 [2024-10-07 09:52:07.900443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.303 [2024-10-07 09:52:07.900690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.303 [2024-10-07 09:52:07.900911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.303 [2024-10-07 09:52:07.900920] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.303 [2024-10-07 09:52:07.900928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.303 [2024-10-07 09:52:07.904415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.303 [2024-10-07 09:52:07.913446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.303 [2024-10-07 09:52:07.914116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.303 [2024-10-07 09:52:07.914160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.303 [2024-10-07 09:52:07.914171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.303 [2024-10-07 09:52:07.914410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.303 [2024-10-07 09:52:07.914640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.303 [2024-10-07 09:52:07.914650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.303 [2024-10-07 09:52:07.914658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.303 [2024-10-07 09:52:07.918151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.303 [2024-10-07 09:52:07.927203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.303 [2024-10-07 09:52:07.927896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.303 [2024-10-07 09:52:07.927941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.303 [2024-10-07 09:52:07.927952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.303 [2024-10-07 09:52:07.928191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.303 [2024-10-07 09:52:07.928411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.303 [2024-10-07 09:52:07.928421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.303 [2024-10-07 09:52:07.928434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.303 [2024-10-07 09:52:07.931938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.303 [2024-10-07 09:52:07.940996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.303 [2024-10-07 09:52:07.941607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.303 [2024-10-07 09:52:07.941663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.303 [2024-10-07 09:52:07.941674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.303 [2024-10-07 09:52:07.941915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.303 [2024-10-07 09:52:07.942136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.303 [2024-10-07 09:52:07.942146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.303 [2024-10-07 09:52:07.942154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.303 [2024-10-07 09:52:07.945652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.303 [2024-10-07 09:52:07.954889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.303 [2024-10-07 09:52:07.955572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.303 [2024-10-07 09:52:07.955632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.303 [2024-10-07 09:52:07.955645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.303 [2024-10-07 09:52:07.955888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.303 [2024-10-07 09:52:07.956109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.303 [2024-10-07 09:52:07.956118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.303 [2024-10-07 09:52:07.956126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.565 [2024-10-07 09:52:07.959629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.565 [2024-10-07 09:52:07.968670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.565 [2024-10-07 09:52:07.969181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.565 [2024-10-07 09:52:07.969233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.565 [2024-10-07 09:52:07.969246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.565 [2024-10-07 09:52:07.969490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.565 [2024-10-07 09:52:07.969723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.565 [2024-10-07 09:52:07.969734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.565 [2024-10-07 09:52:07.969742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.565 [2024-10-07 09:52:07.973241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.565 [2024-10-07 09:52:07.982488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.565 [2024-10-07 09:52:07.983149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.565 [2024-10-07 09:52:07.983206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.565 [2024-10-07 09:52:07.983218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.565 [2024-10-07 09:52:07.983464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.565 [2024-10-07 09:52:07.983698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.565 [2024-10-07 09:52:07.983708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.565 [2024-10-07 09:52:07.983718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.565 [2024-10-07 09:52:07.987224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.565 [2024-10-07 09:52:07.996361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.565 [2024-10-07 09:52:07.997052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.565 [2024-10-07 09:52:07.997115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.565 [2024-10-07 09:52:07.997127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.565 [2024-10-07 09:52:07.997380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.565 [2024-10-07 09:52:07.997602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.565 [2024-10-07 09:52:07.997613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.565 [2024-10-07 09:52:07.997635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.565 [2024-10-07 09:52:08.001145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.565 [2024-10-07 09:52:08.010210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.565 [2024-10-07 09:52:08.010956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.565 [2024-10-07 09:52:08.011020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.565 [2024-10-07 09:52:08.011032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.565 [2024-10-07 09:52:08.011285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.565 [2024-10-07 09:52:08.011508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.565 [2024-10-07 09:52:08.011517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.565 [2024-10-07 09:52:08.011525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.565 [2024-10-07 09:52:08.015052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.565 [2024-10-07 09:52:08.024104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.565 [2024-10-07 09:52:08.024821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.565 [2024-10-07 09:52:08.024885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.565 [2024-10-07 09:52:08.024898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.565 [2024-10-07 09:52:08.025157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.565 [2024-10-07 09:52:08.025381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.565 [2024-10-07 09:52:08.025390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.565 [2024-10-07 09:52:08.025398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.565 [2024-10-07 09:52:08.028912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.565 [2024-10-07 09:52:08.037965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.565 [2024-10-07 09:52:08.038650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.565 [2024-10-07 09:52:08.038714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.565 [2024-10-07 09:52:08.038728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.565 [2024-10-07 09:52:08.038997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.566 [2024-10-07 09:52:08.039222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.566 [2024-10-07 09:52:08.039233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.566 [2024-10-07 09:52:08.039241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.566 [2024-10-07 09:52:08.042763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.566 [2024-10-07 09:52:08.051812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.566 [2024-10-07 09:52:08.052476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.566 [2024-10-07 09:52:08.052540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.566 [2024-10-07 09:52:08.052554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.566 [2024-10-07 09:52:08.052820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.566 [2024-10-07 09:52:08.053045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.566 [2024-10-07 09:52:08.053055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.566 [2024-10-07 09:52:08.053063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.566 [2024-10-07 09:52:08.056568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.566 [2024-10-07 09:52:08.065627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.566 [2024-10-07 09:52:08.066330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.566 [2024-10-07 09:52:08.066393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.566 [2024-10-07 09:52:08.066406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.566 [2024-10-07 09:52:08.066674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.566 [2024-10-07 09:52:08.066899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.566 [2024-10-07 09:52:08.066908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.566 [2024-10-07 09:52:08.066924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.566 [2024-10-07 09:52:08.070428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.566 [2024-10-07 09:52:08.079471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.566 [2024-10-07 09:52:08.080148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.566 [2024-10-07 09:52:08.080211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.566 [2024-10-07 09:52:08.080225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.566 [2024-10-07 09:52:08.080477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.566 [2024-10-07 09:52:08.080711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.566 [2024-10-07 09:52:08.080722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.566 [2024-10-07 09:52:08.080730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.566 [2024-10-07 09:52:08.084238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.566 [2024-10-07 09:52:08.093306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.566 [2024-10-07 09:52:08.093985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.566 [2024-10-07 09:52:08.094050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.566 [2024-10-07 09:52:08.094063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.566 [2024-10-07 09:52:08.094314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.566 [2024-10-07 09:52:08.094537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.566 [2024-10-07 09:52:08.094547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.566 [2024-10-07 09:52:08.094556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.566 [2024-10-07 09:52:08.098082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.566 [2024-10-07 09:52:08.107139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.566 [2024-10-07 09:52:08.107888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.566 [2024-10-07 09:52:08.107952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.566 [2024-10-07 09:52:08.107965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.566 [2024-10-07 09:52:08.108217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.566 [2024-10-07 09:52:08.108440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.566 [2024-10-07 09:52:08.108450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.566 [2024-10-07 09:52:08.108458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.566 [2024-10-07 09:52:08.111972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.566 [2024-10-07 09:52:08.121150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.566 [2024-10-07 09:52:08.121889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.566 [2024-10-07 09:52:08.121960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.566 [2024-10-07 09:52:08.121973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.566 [2024-10-07 09:52:08.122225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.566 [2024-10-07 09:52:08.122448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.566 [2024-10-07 09:52:08.122460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.566 [2024-10-07 09:52:08.122469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.566 [2024-10-07 09:52:08.125984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.566 [2024-10-07 09:52:08.135060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.566 [2024-10-07 09:52:08.135831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.566 [2024-10-07 09:52:08.135893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.566 [2024-10-07 09:52:08.135906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.566 [2024-10-07 09:52:08.136158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.566 [2024-10-07 09:52:08.136381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.566 [2024-10-07 09:52:08.136391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.566 [2024-10-07 09:52:08.136400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.566 [2024-10-07 09:52:08.139935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.566 [2024-10-07 09:52:08.148987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.566 [2024-10-07 09:52:08.149665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.566 [2024-10-07 09:52:08.149729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.566 [2024-10-07 09:52:08.149744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.566 [2024-10-07 09:52:08.149997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.566 [2024-10-07 09:52:08.150220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.566 [2024-10-07 09:52:08.150231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.566 [2024-10-07 09:52:08.150239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.566 [2024-10-07 09:52:08.153764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.566 [2024-10-07 09:52:08.162803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.566 [2024-10-07 09:52:08.163414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.566 [2024-10-07 09:52:08.163477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.566 [2024-10-07 09:52:08.163490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.566 [2024-10-07 09:52:08.163757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.566 [2024-10-07 09:52:08.163993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.566 [2024-10-07 09:52:08.164002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.566 [2024-10-07 09:52:08.164011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.566 [2024-10-07 09:52:08.167515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.566 [2024-10-07 09:52:08.176562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.566 [2024-10-07 09:52:08.177184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.566 [2024-10-07 09:52:08.177213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.566 [2024-10-07 09:52:08.177222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.566 [2024-10-07 09:52:08.177443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.566 [2024-10-07 09:52:08.177669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.566 [2024-10-07 09:52:08.177678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.567 [2024-10-07 09:52:08.177686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.567 [2024-10-07 09:52:08.181182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.567 [2024-10-07 09:52:08.190445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.567 [2024-10-07 09:52:08.191142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.567 [2024-10-07 09:52:08.191205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.567 [2024-10-07 09:52:08.191218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.567 [2024-10-07 09:52:08.191470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.567 [2024-10-07 09:52:08.191709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.567 [2024-10-07 09:52:08.191720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.567 [2024-10-07 09:52:08.191729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.567 [2024-10-07 09:52:08.195237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.567 [2024-10-07 09:52:08.204279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.567 [2024-10-07 09:52:08.204955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.567 [2024-10-07 09:52:08.205014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.567 [2024-10-07 09:52:08.205026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.567 [2024-10-07 09:52:08.205275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.567 [2024-10-07 09:52:08.205498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.567 [2024-10-07 09:52:08.205507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.567 [2024-10-07 09:52:08.205516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.567 [2024-10-07 09:52:08.209041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.567 [2024-10-07 09:52:08.218089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.567 [2024-10-07 09:52:08.218812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.567 [2024-10-07 09:52:08.218875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.567 [2024-10-07 09:52:08.218888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.567 [2024-10-07 09:52:08.219141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.567 [2024-10-07 09:52:08.219364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.567 [2024-10-07 09:52:08.219374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.567 [2024-10-07 09:52:08.219382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.567 [2024-10-07 09:52:08.222913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.830 [2024-10-07 09:52:08.231988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.830 [2024-10-07 09:52:08.232704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.830 [2024-10-07 09:52:08.232768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.830 [2024-10-07 09:52:08.232782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.830 [2024-10-07 09:52:08.233035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.830 [2024-10-07 09:52:08.233259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.830 [2024-10-07 09:52:08.233268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.830 [2024-10-07 09:52:08.233277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.830 [2024-10-07 09:52:08.236802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.830 [2024-10-07 09:52:08.245867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.830 [2024-10-07 09:52:08.246449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.830 [2024-10-07 09:52:08.246477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.830 [2024-10-07 09:52:08.246487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.830 [2024-10-07 09:52:08.246716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.830 [2024-10-07 09:52:08.246935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.830 [2024-10-07 09:52:08.246952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.830 [2024-10-07 09:52:08.246960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.830 [2024-10-07 09:52:08.250462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.830 [2024-10-07 09:52:08.259707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.830 [2024-10-07 09:52:08.260176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.830 [2024-10-07 09:52:08.260203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.830 [2024-10-07 09:52:08.260219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.830 [2024-10-07 09:52:08.260440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.830 [2024-10-07 09:52:08.260673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.830 [2024-10-07 09:52:08.260686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.830 [2024-10-07 09:52:08.260695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.830 [2024-10-07 09:52:08.264197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.830 [2024-10-07 09:52:08.273444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.830 [2024-10-07 09:52:08.274053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.830 [2024-10-07 09:52:08.274075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.830 [2024-10-07 09:52:08.274084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.830 [2024-10-07 09:52:08.274301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.830 [2024-10-07 09:52:08.274519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.830 [2024-10-07 09:52:08.274528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.830 [2024-10-07 09:52:08.274536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.830 [2024-10-07 09:52:08.278035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.830 [2024-10-07 09:52:08.287272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.830 [2024-10-07 09:52:08.287842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.830 [2024-10-07 09:52:08.287864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.830 [2024-10-07 09:52:08.287872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.830 [2024-10-07 09:52:08.288090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.830 [2024-10-07 09:52:08.288308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.830 [2024-10-07 09:52:08.288326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.830 [2024-10-07 09:52:08.288333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.830 [2024-10-07 09:52:08.291849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.830 [2024-10-07 09:52:08.301086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.830 [2024-10-07 09:52:08.301708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.830 [2024-10-07 09:52:08.301752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.830 [2024-10-07 09:52:08.301762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.830 [2024-10-07 09:52:08.301997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.830 [2024-10-07 09:52:08.302217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.830 [2024-10-07 09:52:08.302233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.830 [2024-10-07 09:52:08.302241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.830 [2024-10-07 09:52:08.305753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.830 [2024-10-07 09:52:08.314992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.830 [2024-10-07 09:52:08.315654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.830 [2024-10-07 09:52:08.315718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.830 [2024-10-07 09:52:08.315731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.830 [2024-10-07 09:52:08.315983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.830 [2024-10-07 09:52:08.316206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.830 [2024-10-07 09:52:08.316216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.830 [2024-10-07 09:52:08.316224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.830 [2024-10-07 09:52:08.319747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.830 [2024-10-07 09:52:08.328802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.830 [2024-10-07 09:52:08.329517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.830 [2024-10-07 09:52:08.329579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.830 [2024-10-07 09:52:08.329592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.830 [2024-10-07 09:52:08.329858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.830 [2024-10-07 09:52:08.330082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.830 [2024-10-07 09:52:08.330092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.830 [2024-10-07 09:52:08.330101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.830 [2024-10-07 09:52:08.333604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.830 9845.67 IOPS, 38.46 MiB/s [2024-10-07 09:52:08.343313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.830 [2024-10-07 09:52:08.344022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.830 [2024-10-07 09:52:08.344085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.830 [2024-10-07 09:52:08.344098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.830 [2024-10-07 09:52:08.344350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.830 [2024-10-07 09:52:08.344574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.830 [2024-10-07 09:52:08.344585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.830 [2024-10-07 09:52:08.344594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.830 [2024-10-07 09:52:08.348119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.831 [2024-10-07 09:52:08.357201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.831 [2024-10-07 09:52:08.357923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.831 [2024-10-07 09:52:08.357986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:08.831 [2024-10-07 09:52:08.357999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:08.831 [2024-10-07 09:52:08.358251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:08.831 [2024-10-07 09:52:08.358475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.831 [2024-10-07 09:52:08.358484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.831 [2024-10-07 09:52:08.358492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.831 [2024-10-07 09:52:08.362012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
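
The fragment "9845.67 IOPS, 38.46 MiB/s" above is not part of the error cycle: it is the benchmark's periodic throughput sample landing on the same console line as the reconnect spam, so I/O is still completing on the surviving path while the reset retries fail. The two figures are consistent with a 4 KiB I/O size (an assumption; the configured block size is not shown in this excerpt): 9845.67 IOPS x 4096 B = 40,327,864 B/s, and 40,327,864 / 1,048,576 = 38.46 MiB/s.
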
00:31:08.831 [2024-10-07 09:52:08.371062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.831 [2024-10-07 09:52:08.371738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.831 [2024-10-07 09:52:08.371801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:08.831 [2024-10-07 09:52:08.371814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:08.831 [2024-10-07 09:52:08.372066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:08.831 [2024-10-07 09:52:08.372289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.831 [2024-10-07 09:52:08.372299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.831 [2024-10-07 09:52:08.372307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.831 [2024-10-07 09:52:08.375824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.831 [2024-10-07 09:52:08.384870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.831 [2024-10-07 09:52:08.385473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.831 [2024-10-07 09:52:08.385536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:08.831 [2024-10-07 09:52:08.385549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:08.831 [2024-10-07 09:52:08.385816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:08.831 [2024-10-07 09:52:08.386040] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.831 [2024-10-07 09:52:08.386051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.831 [2024-10-07 09:52:08.386059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.831 [2024-10-07 09:52:08.389581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.831 [2024-10-07 09:52:08.398630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.831 [2024-10-07 09:52:08.399310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.831 [2024-10-07 09:52:08.399373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:08.831 [2024-10-07 09:52:08.399386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:08.831 [2024-10-07 09:52:08.399659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:08.831 [2024-10-07 09:52:08.399884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.831 [2024-10-07 09:52:08.399893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.831 [2024-10-07 09:52:08.399901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.831 [2024-10-07 09:52:08.403405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.831 [2024-10-07 09:52:08.412449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.831 [2024-10-07 09:52:08.413154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.831 [2024-10-07 09:52:08.413217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:08.831 [2024-10-07 09:52:08.413230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:08.831 [2024-10-07 09:52:08.413482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:08.831 [2024-10-07 09:52:08.413720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.831 [2024-10-07 09:52:08.413730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.831 [2024-10-07 09:52:08.413739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.831 [2024-10-07 09:52:08.417248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.831 [2024-10-07 09:52:08.426334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.831 [2024-10-07 09:52:08.427034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.831 [2024-10-07 09:52:08.427098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:08.831 [2024-10-07 09:52:08.427111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:08.831 [2024-10-07 09:52:08.427363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:08.831 [2024-10-07 09:52:08.427586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.831 [2024-10-07 09:52:08.427597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.831 [2024-10-07 09:52:08.427606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.831 [2024-10-07 09:52:08.431136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.831 [2024-10-07 09:52:08.440239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.831 [2024-10-07 09:52:08.440924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.831 [2024-10-07 09:52:08.440988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:08.831 [2024-10-07 09:52:08.441002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:08.831 [2024-10-07 09:52:08.441255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:08.831 [2024-10-07 09:52:08.441478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.831 [2024-10-07 09:52:08.441489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.831 [2024-10-07 09:52:08.441504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.831 [2024-10-07 09:52:08.445040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.831 [2024-10-07 09:52:08.454118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.831 [2024-10-07 09:52:08.454762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.831 [2024-10-07 09:52:08.454825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:08.831 [2024-10-07 09:52:08.454839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:08.831 [2024-10-07 09:52:08.455091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:08.831 [2024-10-07 09:52:08.455316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.831 [2024-10-07 09:52:08.455326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.831 [2024-10-07 09:52:08.455336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.831 [2024-10-07 09:52:08.458861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.831 [2024-10-07 09:52:08.467915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.831 [2024-10-07 09:52:08.468540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.831 [2024-10-07 09:52:08.468568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:08.831 [2024-10-07 09:52:08.468577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:08.831 [2024-10-07 09:52:08.468803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:08.831 [2024-10-07 09:52:08.469022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.831 [2024-10-07 09:52:08.469032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.831 [2024-10-07 09:52:08.469040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.831 [2024-10-07 09:52:08.472634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.831 [2024-10-07 09:52:08.481714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.831 [2024-10-07 09:52:08.482367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.831 [2024-10-07 09:52:08.482430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:08.831 [2024-10-07 09:52:08.482443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:08.831 [2024-10-07 09:52:08.482708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:08.831 [2024-10-07 09:52:08.482933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.831 [2024-10-07 09:52:08.482943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.831 [2024-10-07 09:52:08.482951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.831 [2024-10-07 09:52:08.486457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.093 [2024-10-07 09:52:08.495551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.093 [2024-10-07 09:52:08.496161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.093 [2024-10-07 09:52:08.496189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.093 [2024-10-07 09:52:08.496198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.093 [2024-10-07 09:52:08.496417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.093 [2024-10-07 09:52:08.496643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.094 [2024-10-07 09:52:08.496661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.094 [2024-10-07 09:52:08.496669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.094 [2024-10-07 09:52:08.500177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.094 [... 48 further identical resetting-controller cycles elided: from 09:52:08.509 through 09:52:09.124, roughly every 13-14 ms, each logs the same sequence of connect() failed, errno = 111 / sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 / controller reinitialization failed / Resetting controller failed. The final two cycles follow ...]
00:31:09.623 [2024-10-07 09:52:09.134166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.623 [2024-10-07 09:52:09.134639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.623 [2024-10-07 09:52:09.134651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.623 [2024-10-07 09:52:09.134657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.623 [2024-10-07 09:52:09.134805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.623 [2024-10-07 09:52:09.134953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.623 [2024-10-07 09:52:09.134958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.623 [2024-10-07 09:52:09.134963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.623 [2024-10-07 09:52:09.137350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.623 [2024-10-07 09:52:09.146794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.623 [2024-10-07 09:52:09.147237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.623 [2024-10-07 09:52:09.147249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.623 [2024-10-07 09:52:09.147254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.623 [2024-10-07 09:52:09.147402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.623 [2024-10-07 09:52:09.147556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.623 [2024-10-07 09:52:09.147562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.623 [2024-10-07 09:52:09.147567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.623 [2024-10-07 09:52:09.149959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.623 [2024-10-07 09:52:09.159394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.623 [2024-10-07 09:52:09.159856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.623 [2024-10-07 09:52:09.159868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.623 [2024-10-07 09:52:09.159873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.623 [2024-10-07 09:52:09.160021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.623 [2024-10-07 09:52:09.160169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.623 [2024-10-07 09:52:09.160174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.623 [2024-10-07 09:52:09.160179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.623 [2024-10-07 09:52:09.162567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.623 [2024-10-07 09:52:09.172007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.623 [2024-10-07 09:52:09.172490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.623 [2024-10-07 09:52:09.172502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.623 [2024-10-07 09:52:09.172507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.623 [2024-10-07 09:52:09.172659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.623 [2024-10-07 09:52:09.172807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.623 [2024-10-07 09:52:09.172813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.623 [2024-10-07 09:52:09.172818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.623 [2024-10-07 09:52:09.175204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.623 [2024-10-07 09:52:09.184643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.623 [2024-10-07 09:52:09.185182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.623 [2024-10-07 09:52:09.185213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.623 [2024-10-07 09:52:09.185222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.623 [2024-10-07 09:52:09.185386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.623 [2024-10-07 09:52:09.185537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.623 [2024-10-07 09:52:09.185543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.623 [2024-10-07 09:52:09.185548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.623 [2024-10-07 09:52:09.187946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.623 [2024-10-07 09:52:09.197253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.624 [2024-10-07 09:52:09.197757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.624 [2024-10-07 09:52:09.197788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.624 [2024-10-07 09:52:09.197797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.624 [2024-10-07 09:52:09.197964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.624 [2024-10-07 09:52:09.198115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.624 [2024-10-07 09:52:09.198121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.624 [2024-10-07 09:52:09.198127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.624 [2024-10-07 09:52:09.200526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.624 [2024-10-07 09:52:09.209824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.624 [2024-10-07 09:52:09.210304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.624 [2024-10-07 09:52:09.210318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.624 [2024-10-07 09:52:09.210324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.624 [2024-10-07 09:52:09.210472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.624 [2024-10-07 09:52:09.210624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.624 [2024-10-07 09:52:09.210631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.624 [2024-10-07 09:52:09.210636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.624 [2024-10-07 09:52:09.213025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.624 [2024-10-07 09:52:09.222451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.624 [2024-10-07 09:52:09.223050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.624 [2024-10-07 09:52:09.223081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.624 [2024-10-07 09:52:09.223090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.624 [2024-10-07 09:52:09.223254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.624 [2024-10-07 09:52:09.223405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.624 [2024-10-07 09:52:09.223411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.624 [2024-10-07 09:52:09.223416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.624 [2024-10-07 09:52:09.225815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.624 [2024-10-07 09:52:09.235108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.624 [2024-10-07 09:52:09.235585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.624 [2024-10-07 09:52:09.235614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.624 [2024-10-07 09:52:09.235632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.624 [2024-10-07 09:52:09.235799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.624 [2024-10-07 09:52:09.235950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.624 [2024-10-07 09:52:09.235956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.624 [2024-10-07 09:52:09.235962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.624 [2024-10-07 09:52:09.238354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.624 [2024-10-07 09:52:09.247797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.624 [2024-10-07 09:52:09.248368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.624 [2024-10-07 09:52:09.248399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.624 [2024-10-07 09:52:09.248408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.624 [2024-10-07 09:52:09.248571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.624 [2024-10-07 09:52:09.248728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.624 [2024-10-07 09:52:09.248735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.624 [2024-10-07 09:52:09.248741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.624 [2024-10-07 09:52:09.251132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.624 [2024-10-07 09:52:09.260424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.624 [2024-10-07 09:52:09.260981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.624 [2024-10-07 09:52:09.261012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.624 [2024-10-07 09:52:09.261021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.624 [2024-10-07 09:52:09.261184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.624 [2024-10-07 09:52:09.261335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.624 [2024-10-07 09:52:09.261342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.624 [2024-10-07 09:52:09.261347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.624 [2024-10-07 09:52:09.263747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.624 [2024-10-07 09:52:09.273066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.624 [2024-10-07 09:52:09.273552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.624 [2024-10-07 09:52:09.273567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.624 [2024-10-07 09:52:09.273573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.624 [2024-10-07 09:52:09.273726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.624 [2024-10-07 09:52:09.273875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.624 [2024-10-07 09:52:09.273884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.624 [2024-10-07 09:52:09.273889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.624 [2024-10-07 09:52:09.276277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.948 [2024-10-07 09:52:09.285708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.948 [2024-10-07 09:52:09.286175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.948 [2024-10-07 09:52:09.286187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.948 [2024-10-07 09:52:09.286192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.948 [2024-10-07 09:52:09.286340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.948 [2024-10-07 09:52:09.286488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.948 [2024-10-07 09:52:09.286494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.948 [2024-10-07 09:52:09.286498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.948 [2024-10-07 09:52:09.288896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.949 [2024-10-07 09:52:09.298322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.949 [2024-10-07 09:52:09.298886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.949 [2024-10-07 09:52:09.298917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.949 [2024-10-07 09:52:09.298926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.949 [2024-10-07 09:52:09.299090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.949 [2024-10-07 09:52:09.299241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.949 [2024-10-07 09:52:09.299248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.949 [2024-10-07 09:52:09.299253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.949 [2024-10-07 09:52:09.301649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.949 [2024-10-07 09:52:09.310940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.949 [2024-10-07 09:52:09.311316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.949 [2024-10-07 09:52:09.311331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.949 [2024-10-07 09:52:09.311337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.949 [2024-10-07 09:52:09.311485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.949 [2024-10-07 09:52:09.311638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.949 [2024-10-07 09:52:09.311644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.949 [2024-10-07 09:52:09.311649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.949 [2024-10-07 09:52:09.314040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.949 [2024-10-07 09:52:09.323603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.949 [2024-10-07 09:52:09.324122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.949 [2024-10-07 09:52:09.324152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.949 [2024-10-07 09:52:09.324161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.949 [2024-10-07 09:52:09.324324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.949 [2024-10-07 09:52:09.324476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.949 [2024-10-07 09:52:09.324482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.949 [2024-10-07 09:52:09.324488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.949 [2024-10-07 09:52:09.326887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.949 [2024-10-07 09:52:09.336177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.949 [2024-10-07 09:52:09.336654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.949 [2024-10-07 09:52:09.336668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.949 [2024-10-07 09:52:09.336674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.949 [2024-10-07 09:52:09.336823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.949 [2024-10-07 09:52:09.336974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.949 [2024-10-07 09:52:09.336980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.949 [2024-10-07 09:52:09.336986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.949 7384.25 IOPS, 28.84 MiB/s [2024-10-07 09:52:09.340503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.949 [2024-10-07 09:52:09.348820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.949 [2024-10-07 09:52:09.349374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.949 [2024-10-07 09:52:09.349405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.949 [2024-10-07 09:52:09.349414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.949 [2024-10-07 09:52:09.349578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.949 [2024-10-07 09:52:09.349736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.949 [2024-10-07 09:52:09.349743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.949 [2024-10-07 09:52:09.349749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.949 [2024-10-07 09:52:09.352138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.949 [2024-10-07 09:52:09.361425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.949 [2024-10-07 09:52:09.361854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.949 [2024-10-07 09:52:09.361885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.949 [2024-10-07 09:52:09.361895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.949 [2024-10-07 09:52:09.362065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.949 [2024-10-07 09:52:09.362216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.949 [2024-10-07 09:52:09.362222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.949 [2024-10-07 09:52:09.362227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.949 [2024-10-07 09:52:09.364627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.949 [2024-10-07 09:52:09.374061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.949 [2024-10-07 09:52:09.374666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.949 [2024-10-07 09:52:09.374697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.949 [2024-10-07 09:52:09.374706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.949 [2024-10-07 09:52:09.374873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.949 [2024-10-07 09:52:09.375024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.949 [2024-10-07 09:52:09.375030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.949 [2024-10-07 09:52:09.375036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.949 [2024-10-07 09:52:09.377436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.949 [2024-10-07 09:52:09.386728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.949 [2024-10-07 09:52:09.387270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.949 [2024-10-07 09:52:09.387301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.949 [2024-10-07 09:52:09.387310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.949 [2024-10-07 09:52:09.387473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.949 [2024-10-07 09:52:09.387631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.949 [2024-10-07 09:52:09.387638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.949 [2024-10-07 09:52:09.387644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.949 [2024-10-07 09:52:09.390043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.949 [2024-10-07 09:52:09.399327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.949 [2024-10-07 09:52:09.399946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.949 [2024-10-07 09:52:09.399977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.949 [2024-10-07 09:52:09.399986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.949 [2024-10-07 09:52:09.400150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.949 [2024-10-07 09:52:09.400301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.949 [2024-10-07 09:52:09.400307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.949 [2024-10-07 09:52:09.400316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.949 [2024-10-07 09:52:09.402720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.949 [2024-10-07 09:52:09.412002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.949 [2024-10-07 09:52:09.412456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.949 [2024-10-07 09:52:09.412471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.949 [2024-10-07 09:52:09.412477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.949 [2024-10-07 09:52:09.412631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.949 [2024-10-07 09:52:09.412780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.949 [2024-10-07 09:52:09.412785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.949 [2024-10-07 09:52:09.412790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.949 [2024-10-07 09:52:09.415178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.949 [2024-10-07 09:52:09.424591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.950 [2024-10-07 09:52:09.425163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.950 [2024-10-07 09:52:09.425194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.950 [2024-10-07 09:52:09.425203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.950 [2024-10-07 09:52:09.425367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.950 [2024-10-07 09:52:09.425518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.950 [2024-10-07 09:52:09.425524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.950 [2024-10-07 09:52:09.425529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.950 [2024-10-07 09:52:09.427928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.950 [2024-10-07 09:52:09.437211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.950 [2024-10-07 09:52:09.437723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.950 [2024-10-07 09:52:09.437754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.950 [2024-10-07 09:52:09.437763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.950 [2024-10-07 09:52:09.437929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.950 [2024-10-07 09:52:09.438080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.950 [2024-10-07 09:52:09.438086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.950 [2024-10-07 09:52:09.438091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.950 [2024-10-07 09:52:09.440487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.950 [2024-10-07 09:52:09.449777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.950 [2024-10-07 09:52:09.450354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.950 [2024-10-07 09:52:09.450385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.950 [2024-10-07 09:52:09.450394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.950 [2024-10-07 09:52:09.450558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.950 [2024-10-07 09:52:09.450715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.950 [2024-10-07 09:52:09.450722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.950 [2024-10-07 09:52:09.450728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.950 [2024-10-07 09:52:09.453120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.950 [2024-10-07 09:52:09.462405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.950 [2024-10-07 09:52:09.462986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.950 [2024-10-07 09:52:09.463017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.950 [2024-10-07 09:52:09.463026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.950 [2024-10-07 09:52:09.463190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.950 [2024-10-07 09:52:09.463340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.950 [2024-10-07 09:52:09.463346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.950 [2024-10-07 09:52:09.463352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.950 [2024-10-07 09:52:09.465749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.950 [2024-10-07 09:52:09.475029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.950 [2024-10-07 09:52:09.475573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.950 [2024-10-07 09:52:09.475604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.950 [2024-10-07 09:52:09.475613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.950 [2024-10-07 09:52:09.475783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.950 [2024-10-07 09:52:09.475934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.950 [2024-10-07 09:52:09.475941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.950 [2024-10-07 09:52:09.475946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.950 [2024-10-07 09:52:09.478338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.950 [2024-10-07 09:52:09.487637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.950 [2024-10-07 09:52:09.488180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.950 [2024-10-07 09:52:09.488211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.950 [2024-10-07 09:52:09.488219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.950 [2024-10-07 09:52:09.488386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.950 [2024-10-07 09:52:09.488537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.950 [2024-10-07 09:52:09.488544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.950 [2024-10-07 09:52:09.488549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.950 [2024-10-07 09:52:09.490955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.950 [2024-10-07 09:52:09.500242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.950 [2024-10-07 09:52:09.500818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.950 [2024-10-07 09:52:09.500850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.950 [2024-10-07 09:52:09.500859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.950 [2024-10-07 09:52:09.501023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.950 [2024-10-07 09:52:09.501174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.950 [2024-10-07 09:52:09.501181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.950 [2024-10-07 09:52:09.501186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.950 [2024-10-07 09:52:09.503580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.950 [2024-10-07 09:52:09.512869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.950 [2024-10-07 09:52:09.513357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.950 [2024-10-07 09:52:09.513371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.950 [2024-10-07 09:52:09.513377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.950 [2024-10-07 09:52:09.513526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.950 [2024-10-07 09:52:09.513679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.950 [2024-10-07 09:52:09.513685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.950 [2024-10-07 09:52:09.513690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.950 [2024-10-07 09:52:09.516076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.950 [2024-10-07 09:52:09.525496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.950 [2024-10-07 09:52:09.525965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.950 [2024-10-07 09:52:09.525978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.950 [2024-10-07 09:52:09.525984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.950 [2024-10-07 09:52:09.526132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.950 [2024-10-07 09:52:09.526280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.950 [2024-10-07 09:52:09.526286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.950 [2024-10-07 09:52:09.526291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.950 [2024-10-07 09:52:09.528816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.950 [2024-10-07 09:52:09.538108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.950 [2024-10-07 09:52:09.538749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.950 [2024-10-07 09:52:09.538780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.950 [2024-10-07 09:52:09.538789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.950 [2024-10-07 09:52:09.538953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.950 [2024-10-07 09:52:09.539104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.950 [2024-10-07 09:52:09.539110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.950 [2024-10-07 09:52:09.539115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.950 [2024-10-07 09:52:09.541511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.950 [2024-10-07 09:52:09.550675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.950 [2024-10-07 09:52:09.551171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.950 [2024-10-07 09:52:09.551185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.951 [2024-10-07 09:52:09.551191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.951 [2024-10-07 09:52:09.551339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.951 [2024-10-07 09:52:09.551487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.951 [2024-10-07 09:52:09.551493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.951 [2024-10-07 09:52:09.551498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.951 [2024-10-07 09:52:09.553892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.951 [2024-10-07 09:52:09.563314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.951 [2024-10-07 09:52:09.563725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.951 [2024-10-07 09:52:09.563756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.951 [2024-10-07 09:52:09.563765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.951 [2024-10-07 09:52:09.563931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.951 [2024-10-07 09:52:09.564081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.951 [2024-10-07 09:52:09.564088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.951 [2024-10-07 09:52:09.564093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.951 [2024-10-07 09:52:09.566490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.951 [2024-10-07 09:52:09.575924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.951 [2024-10-07 09:52:09.576265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.951 [2024-10-07 09:52:09.576286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.951 [2024-10-07 09:52:09.576292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.951 [2024-10-07 09:52:09.576443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.951 [2024-10-07 09:52:09.576592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.951 [2024-10-07 09:52:09.576598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.951 [2024-10-07 09:52:09.576604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.951 [2024-10-07 09:52:09.579001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.951 [2024-10-07 09:52:09.588570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.951 [2024-10-07 09:52:09.589143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.951 [2024-10-07 09:52:09.589174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.951 [2024-10-07 09:52:09.589183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.951 [2024-10-07 09:52:09.589347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.951 [2024-10-07 09:52:09.589498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.951 [2024-10-07 09:52:09.589504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.951 [2024-10-07 09:52:09.589509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.951 [2024-10-07 09:52:09.591907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.951 [2024-10-07 09:52:09.601198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.951 [2024-10-07 09:52:09.601746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.951 [2024-10-07 09:52:09.601777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:09.951 [2024-10-07 09:52:09.601786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:09.951 [2024-10-07 09:52:09.601952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:09.951 [2024-10-07 09:52:09.602103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.951 [2024-10-07 09:52:09.602110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.951 [2024-10-07 09:52:09.602115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.951 [2024-10-07 09:52:09.604510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.334 [2024-10-07 09:52:09.613803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.334 [2024-10-07 09:52:09.614362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.334 [2024-10-07 09:52:09.614393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.335 [2024-10-07 09:52:09.614402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.335 [2024-10-07 09:52:09.614566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.335 [2024-10-07 09:52:09.614729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.335 [2024-10-07 09:52:09.614736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.335 [2024-10-07 09:52:09.614741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.335 [2024-10-07 09:52:09.617134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.335 [2024-10-07 09:52:09.626429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.335 [2024-10-07 09:52:09.626987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.335 [2024-10-07 09:52:09.627018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.335 [2024-10-07 09:52:09.627027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.335 [2024-10-07 09:52:09.627191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.335 [2024-10-07 09:52:09.627342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.335 [2024-10-07 09:52:09.627348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.335 [2024-10-07 09:52:09.627353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.335 [2024-10-07 09:52:09.629753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... 50 further reset attempts repeat the identical cycle above (disconnect, connect() to 10.0.0.2:4420 refused with errno = 111, flush on bad file descriptor, controller reinitialization failed) at roughly 12-13 ms intervals, from 09:52:09.639045 through the final "Resetting controller failed." at 09:52:10.261697 ...]
00:31:10.865 [2024-10-07 09:52:10.270981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.865 [2024-10-07 09:52:10.271550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.865 [2024-10-07 09:52:10.271581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.865 [2024-10-07 09:52:10.271590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.865 [2024-10-07 09:52:10.271762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.865 [2024-10-07 09:52:10.271914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.865 [2024-10-07 09:52:10.271920] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.865 [2024-10-07 09:52:10.271925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.865 [2024-10-07 09:52:10.274319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.865 [2024-10-07 09:52:10.283603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.865 [2024-10-07 09:52:10.284158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.865 [2024-10-07 09:52:10.284189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.865 [2024-10-07 09:52:10.284197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.865 [2024-10-07 09:52:10.284361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.865 [2024-10-07 09:52:10.284512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.865 [2024-10-07 09:52:10.284519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.865 [2024-10-07 09:52:10.284524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.865 [2024-10-07 09:52:10.286922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.865 [2024-10-07 09:52:10.296217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.865 [2024-10-07 09:52:10.296883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.865 [2024-10-07 09:52:10.296914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.865 [2024-10-07 09:52:10.296922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.865 [2024-10-07 09:52:10.297090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.865 [2024-10-07 09:52:10.297241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.865 [2024-10-07 09:52:10.297247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.865 [2024-10-07 09:52:10.297253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.865 [2024-10-07 09:52:10.299652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.865 [2024-10-07 09:52:10.308797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.865 [2024-10-07 09:52:10.309254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.865 [2024-10-07 09:52:10.309283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.865 [2024-10-07 09:52:10.309292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.865 [2024-10-07 09:52:10.309456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.865 [2024-10-07 09:52:10.309607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.865 [2024-10-07 09:52:10.309613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.865 [2024-10-07 09:52:10.309626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.865 [2024-10-07 09:52:10.312023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.865 [2024-10-07 09:52:10.321454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.865 [2024-10-07 09:52:10.322026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.865 [2024-10-07 09:52:10.322056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.865 [2024-10-07 09:52:10.322065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.865 [2024-10-07 09:52:10.322229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.865 [2024-10-07 09:52:10.322380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.865 [2024-10-07 09:52:10.322386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.865 [2024-10-07 09:52:10.322391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.865 [2024-10-07 09:52:10.324791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.865 [2024-10-07 09:52:10.334074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.865 [2024-10-07 09:52:10.334647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.865 [2024-10-07 09:52:10.334678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.865 [2024-10-07 09:52:10.334687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.865 [2024-10-07 09:52:10.334853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.865 [2024-10-07 09:52:10.335004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.865 [2024-10-07 09:52:10.335010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.865 [2024-10-07 09:52:10.335019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.865 [2024-10-07 09:52:10.337416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.865 5907.40 IOPS, 23.08 MiB/s [2024-10-07 09:52:10.346727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.865 [2024-10-07 09:52:10.347298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.865 [2024-10-07 09:52:10.347328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.865 [2024-10-07 09:52:10.347337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.865 [2024-10-07 09:52:10.347501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.865 [2024-10-07 09:52:10.347659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.865 [2024-10-07 09:52:10.347666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.865 [2024-10-07 09:52:10.347672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.865 [2024-10-07 09:52:10.350062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.865 [2024-10-07 09:52:10.359342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.865 [2024-10-07 09:52:10.359910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.865 [2024-10-07 09:52:10.359941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.865 [2024-10-07 09:52:10.359949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.865 [2024-10-07 09:52:10.360113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.865 [2024-10-07 09:52:10.360264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.865 [2024-10-07 09:52:10.360270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.865 [2024-10-07 09:52:10.360276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.865 [2024-10-07 09:52:10.362675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
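The interleaved "5907.40 IOPS, 23.08 MiB/s" entry is the I/O workload's periodic throughput report (presumably bdevperf, given the surrounding bdev_nvme messages) landing between driver errors: the workload keeps running and being measured while the controller is disconnected, which is the point of this reset-under-load scenario.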
00:31:10.865 [2024-10-07 09:52:10.371956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.865 [2024-10-07 09:52:10.372340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.865 [2024-10-07 09:52:10.372356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.865 [2024-10-07 09:52:10.372361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.865 [2024-10-07 09:52:10.372510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.865 [2024-10-07 09:52:10.372664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.865 [2024-10-07 09:52:10.372671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.865 [2024-10-07 09:52:10.372676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.865 [2024-10-07 09:52:10.375063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.865 [2024-10-07 09:52:10.384639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.865 [2024-10-07 09:52:10.385212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.865 [2024-10-07 09:52:10.385243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.865 [2024-10-07 09:52:10.385252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.865 [2024-10-07 09:52:10.385415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.866 [2024-10-07 09:52:10.385567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.866 [2024-10-07 09:52:10.385573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.866 [2024-10-07 09:52:10.385578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.866 [2024-10-07 09:52:10.387977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.866 [2024-10-07 09:52:10.397271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.866 [2024-10-07 09:52:10.397646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.866 [2024-10-07 09:52:10.397662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.866 [2024-10-07 09:52:10.397667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.866 [2024-10-07 09:52:10.397816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.866 [2024-10-07 09:52:10.397965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.866 [2024-10-07 09:52:10.397970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.866 [2024-10-07 09:52:10.397975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.866 [2024-10-07 09:52:10.400361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.866 [2024-10-07 09:52:10.409925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.866 [2024-10-07 09:52:10.410373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.866 [2024-10-07 09:52:10.410385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.866 [2024-10-07 09:52:10.410390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.866 [2024-10-07 09:52:10.410538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.866 [2024-10-07 09:52:10.410691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.866 [2024-10-07 09:52:10.410697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.866 [2024-10-07 09:52:10.410702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.866 [2024-10-07 09:52:10.413088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.866 [2024-10-07 09:52:10.422511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.866 [2024-10-07 09:52:10.422985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.866 [2024-10-07 09:52:10.422997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.866 [2024-10-07 09:52:10.423002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.866 [2024-10-07 09:52:10.423151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.866 [2024-10-07 09:52:10.423305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.866 [2024-10-07 09:52:10.423310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.866 [2024-10-07 09:52:10.423315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.866 [2024-10-07 09:52:10.425704] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.866 [2024-10-07 09:52:10.435128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.866 [2024-10-07 09:52:10.435580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.866 [2024-10-07 09:52:10.435592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.866 [2024-10-07 09:52:10.435598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.866 [2024-10-07 09:52:10.435750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.866 [2024-10-07 09:52:10.435899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.866 [2024-10-07 09:52:10.435904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.866 [2024-10-07 09:52:10.435909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.866 [2024-10-07 09:52:10.438293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.866 [2024-10-07 09:52:10.447725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.866 [2024-10-07 09:52:10.448176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.866 [2024-10-07 09:52:10.448188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.866 [2024-10-07 09:52:10.448193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.866 [2024-10-07 09:52:10.448341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.866 [2024-10-07 09:52:10.448489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.866 [2024-10-07 09:52:10.448495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.866 [2024-10-07 09:52:10.448500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.866 [2024-10-07 09:52:10.450887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.866 [2024-10-07 09:52:10.460305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.866 [2024-10-07 09:52:10.460640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.866 [2024-10-07 09:52:10.460652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.866 [2024-10-07 09:52:10.460657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.866 [2024-10-07 09:52:10.460805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.866 [2024-10-07 09:52:10.460952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.866 [2024-10-07 09:52:10.460957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.866 [2024-10-07 09:52:10.460963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.866 [2024-10-07 09:52:10.463351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.866 [2024-10-07 09:52:10.472918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.866 [2024-10-07 09:52:10.473363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.866 [2024-10-07 09:52:10.473374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.866 [2024-10-07 09:52:10.473380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.866 [2024-10-07 09:52:10.473528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.866 [2024-10-07 09:52:10.473679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.866 [2024-10-07 09:52:10.473686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.866 [2024-10-07 09:52:10.473691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.866 [2024-10-07 09:52:10.476077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.866 [2024-10-07 09:52:10.485572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.866 [2024-10-07 09:52:10.486027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.866 [2024-10-07 09:52:10.486039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.866 [2024-10-07 09:52:10.486044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.866 [2024-10-07 09:52:10.486192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.866 [2024-10-07 09:52:10.486339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.866 [2024-10-07 09:52:10.486345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.866 [2024-10-07 09:52:10.486350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.866 [2024-10-07 09:52:10.488738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.866 [2024-10-07 09:52:10.498165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.866 [2024-10-07 09:52:10.498647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.866 [2024-10-07 09:52:10.498659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.866 [2024-10-07 09:52:10.498664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.866 [2024-10-07 09:52:10.498812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.866 [2024-10-07 09:52:10.498959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.866 [2024-10-07 09:52:10.498965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.866 [2024-10-07 09:52:10.498970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.866 [2024-10-07 09:52:10.501356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.866 [2024-10-07 09:52:10.510780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.866 [2024-10-07 09:52:10.511384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.866 [2024-10-07 09:52:10.511415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.866 [2024-10-07 09:52:10.511428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.866 [2024-10-07 09:52:10.511594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.866 [2024-10-07 09:52:10.511751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.866 [2024-10-07 09:52:10.511759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.867 [2024-10-07 09:52:10.511764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.867 [2024-10-07 09:52:10.514155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.867 [2024-10-07 09:52:10.523444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.867 [2024-10-07 09:52:10.524034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.867 [2024-10-07 09:52:10.524065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:10.867 [2024-10-07 09:52:10.524074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:10.867 [2024-10-07 09:52:10.524237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:10.867 [2024-10-07 09:52:10.524388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.867 [2024-10-07 09:52:10.524395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.867 [2024-10-07 09:52:10.524401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.130 [2024-10-07 09:52:10.526798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.130 [2024-10-07 09:52:10.536096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.130 [2024-10-07 09:52:10.536702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.130 [2024-10-07 09:52:10.536732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.130 [2024-10-07 09:52:10.536741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.130 [2024-10-07 09:52:10.536907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.130 [2024-10-07 09:52:10.537058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.130 [2024-10-07 09:52:10.537064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.130 [2024-10-07 09:52:10.537070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.130 [2024-10-07 09:52:10.539468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.130 [2024-10-07 09:52:10.548768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.130 [2024-10-07 09:52:10.549262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.130 [2024-10-07 09:52:10.549276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.130 [2024-10-07 09:52:10.549282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.130 [2024-10-07 09:52:10.549431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.130 [2024-10-07 09:52:10.549582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.130 [2024-10-07 09:52:10.549588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.130 [2024-10-07 09:52:10.549593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.130 [2024-10-07 09:52:10.552133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.130 [2024-10-07 09:52:10.561425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.130 [2024-10-07 09:52:10.561970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.130 [2024-10-07 09:52:10.562001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.130 [2024-10-07 09:52:10.562010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.131 [2024-10-07 09:52:10.562174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.131 [2024-10-07 09:52:10.562325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.131 [2024-10-07 09:52:10.562331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.131 [2024-10-07 09:52:10.562337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.131 [2024-10-07 09:52:10.564737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.131 [2024-10-07 09:52:10.574026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.131 [2024-10-07 09:52:10.574600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.131 [2024-10-07 09:52:10.574637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.131 [2024-10-07 09:52:10.574646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.131 [2024-10-07 09:52:10.574813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.131 [2024-10-07 09:52:10.574964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.131 [2024-10-07 09:52:10.574970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.131 [2024-10-07 09:52:10.574975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.131 [2024-10-07 09:52:10.577369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.131 [2024-10-07 09:52:10.586675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.131 [2024-10-07 09:52:10.587138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.131 [2024-10-07 09:52:10.587169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.131 [2024-10-07 09:52:10.587178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.131 [2024-10-07 09:52:10.587342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.131 [2024-10-07 09:52:10.587493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.131 [2024-10-07 09:52:10.587499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.131 [2024-10-07 09:52:10.587505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.131 [2024-10-07 09:52:10.589902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.131 [2024-10-07 09:52:10.599350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.131 [2024-10-07 09:52:10.599860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.131 [2024-10-07 09:52:10.599876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.131 [2024-10-07 09:52:10.599882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.131 [2024-10-07 09:52:10.600030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.131 [2024-10-07 09:52:10.600178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.131 [2024-10-07 09:52:10.600184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.131 [2024-10-07 09:52:10.600189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.131 [2024-10-07 09:52:10.602578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.131 [2024-10-07 09:52:10.612020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.131 [2024-10-07 09:52:10.612499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.131 [2024-10-07 09:52:10.612512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.131 [2024-10-07 09:52:10.612517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.131 [2024-10-07 09:52:10.612670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.131 [2024-10-07 09:52:10.612818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.131 [2024-10-07 09:52:10.612824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.131 [2024-10-07 09:52:10.612829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.131 [2024-10-07 09:52:10.615260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.131 [2024-10-07 09:52:10.624705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.131 [2024-10-07 09:52:10.625179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.131 [2024-10-07 09:52:10.625191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.131 [2024-10-07 09:52:10.625196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.131 [2024-10-07 09:52:10.625344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.131 [2024-10-07 09:52:10.625492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.131 [2024-10-07 09:52:10.625498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.131 [2024-10-07 09:52:10.625503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.131 [2024-10-07 09:52:10.627897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.131 [2024-10-07 09:52:10.637336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.131 [2024-10-07 09:52:10.637809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.131 [2024-10-07 09:52:10.637822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.131 [2024-10-07 09:52:10.637830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.131 [2024-10-07 09:52:10.637979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.131 [2024-10-07 09:52:10.638127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.131 [2024-10-07 09:52:10.638132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.131 [2024-10-07 09:52:10.638137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.131 [2024-10-07 09:52:10.640527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.131 [2024-10-07 09:52:10.649976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.131 [2024-10-07 09:52:10.650422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.131 [2024-10-07 09:52:10.650434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.131 [2024-10-07 09:52:10.650440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.131 [2024-10-07 09:52:10.650588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.131 [2024-10-07 09:52:10.650741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.131 [2024-10-07 09:52:10.650747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.131 [2024-10-07 09:52:10.650752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.131 [2024-10-07 09:52:10.653141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.131 [2024-10-07 09:52:10.662577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.131 [2024-10-07 09:52:10.663033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.131 [2024-10-07 09:52:10.663045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.131 [2024-10-07 09:52:10.663050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.131 [2024-10-07 09:52:10.663198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.131 [2024-10-07 09:52:10.663345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.131 [2024-10-07 09:52:10.663351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.131 [2024-10-07 09:52:10.663356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.131 [2024-10-07 09:52:10.665751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.131 [2024-10-07 09:52:10.675188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.131 [2024-10-07 09:52:10.675518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.131 [2024-10-07 09:52:10.675529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.131 [2024-10-07 09:52:10.675534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.131 [2024-10-07 09:52:10.675687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.131 [2024-10-07 09:52:10.675835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.131 [2024-10-07 09:52:10.675843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.131 [2024-10-07 09:52:10.675848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.131 [2024-10-07 09:52:10.678237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.131 [2024-10-07 09:52:10.687814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.131 [2024-10-07 09:52:10.688255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.131 [2024-10-07 09:52:10.688267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.131 [2024-10-07 09:52:10.688272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.131 [2024-10-07 09:52:10.688420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.132 [2024-10-07 09:52:10.688567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.132 [2024-10-07 09:52:10.688573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.132 [2024-10-07 09:52:10.688578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.132 [2024-10-07 09:52:10.690972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.132 [2024-10-07 09:52:10.700417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.132 [2024-10-07 09:52:10.700865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.132 [2024-10-07 09:52:10.700877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.132 [2024-10-07 09:52:10.700883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.132 [2024-10-07 09:52:10.701031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.132 [2024-10-07 09:52:10.701179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.132 [2024-10-07 09:52:10.701184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.132 [2024-10-07 09:52:10.701189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.132 [2024-10-07 09:52:10.703577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.132 [2024-10-07 09:52:10.713016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.132 [2024-10-07 09:52:10.713465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.132 [2024-10-07 09:52:10.713477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.132 [2024-10-07 09:52:10.713482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.132 [2024-10-07 09:52:10.713634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.132 [2024-10-07 09:52:10.713783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.132 [2024-10-07 09:52:10.713789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.132 [2024-10-07 09:52:10.713794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.132 [2024-10-07 09:52:10.716182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.132 [2024-10-07 09:52:10.725621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.132 [2024-10-07 09:52:10.726212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.132 [2024-10-07 09:52:10.726243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.132 [2024-10-07 09:52:10.726252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.132 [2024-10-07 09:52:10.726415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.132 [2024-10-07 09:52:10.726567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.132 [2024-10-07 09:52:10.726573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.132 [2024-10-07 09:52:10.726579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.132 [2024-10-07 09:52:10.728981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.132 [2024-10-07 09:52:10.738280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.132 [2024-10-07 09:52:10.738747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.132 [2024-10-07 09:52:10.738762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.132 [2024-10-07 09:52:10.738768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.132 [2024-10-07 09:52:10.738916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.132 [2024-10-07 09:52:10.739064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.132 [2024-10-07 09:52:10.739069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.132 [2024-10-07 09:52:10.739074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.132 [2024-10-07 09:52:10.741465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.132 [2024-10-07 09:52:10.750921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.132 [2024-10-07 09:52:10.751242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.132 [2024-10-07 09:52:10.751255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.132 [2024-10-07 09:52:10.751261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.132 [2024-10-07 09:52:10.751408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.132 [2024-10-07 09:52:10.751555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.132 [2024-10-07 09:52:10.751561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.132 [2024-10-07 09:52:10.751566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.132 [2024-10-07 09:52:10.753961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.132 [2024-10-07 09:52:10.763536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.132 [2024-10-07 09:52:10.763988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.132 [2024-10-07 09:52:10.764000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.132 [2024-10-07 09:52:10.764006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.132 [2024-10-07 09:52:10.764158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.132 [2024-10-07 09:52:10.764306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.132 [2024-10-07 09:52:10.764312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.132 [2024-10-07 09:52:10.764317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.132 [2024-10-07 09:52:10.766708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.132 [2024-10-07 09:52:10.776149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.132 [2024-10-07 09:52:10.776633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.132 [2024-10-07 09:52:10.776646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.132 [2024-10-07 09:52:10.776652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.132 [2024-10-07 09:52:10.776800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.132 [2024-10-07 09:52:10.776947] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.132 [2024-10-07 09:52:10.776953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.132 [2024-10-07 09:52:10.776958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.132 [2024-10-07 09:52:10.779346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.132 [2024-10-07 09:52:10.788786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.132 [2024-10-07 09:52:10.789271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.132 [2024-10-07 09:52:10.789282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.132 [2024-10-07 09:52:10.789288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.132 [2024-10-07 09:52:10.789435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.132 [2024-10-07 09:52:10.789583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.132 [2024-10-07 09:52:10.789589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.132 [2024-10-07 09:52:10.789593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.395 [2024-10-07 09:52:10.791985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.395 [2024-10-07 09:52:10.801428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.395 [2024-10-07 09:52:10.801887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.395 [2024-10-07 09:52:10.801899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.395 [2024-10-07 09:52:10.801906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.395 [2024-10-07 09:52:10.802054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.395 [2024-10-07 09:52:10.802202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.395 [2024-10-07 09:52:10.802207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.395 [2024-10-07 09:52:10.802216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.395 [2024-10-07 09:52:10.804603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.395 [2024-10-07 09:52:10.814038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.395 [2024-10-07 09:52:10.814510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.395 [2024-10-07 09:52:10.814522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.395 [2024-10-07 09:52:10.814527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.395 [2024-10-07 09:52:10.814679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.395 [2024-10-07 09:52:10.814828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.395 [2024-10-07 09:52:10.814834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.395 [2024-10-07 09:52:10.814839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.395 [2024-10-07 09:52:10.817223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.395 [2024-10-07 09:52:10.826659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.395 [2024-10-07 09:52:10.827226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.395 [2024-10-07 09:52:10.827256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.395 [2024-10-07 09:52:10.827265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.395 [2024-10-07 09:52:10.827428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.395 [2024-10-07 09:52:10.827579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.395 [2024-10-07 09:52:10.827585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.395 [2024-10-07 09:52:10.827591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.395 [2024-10-07 09:52:10.829996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3544885 Killed "${NVMF_APP[@]}" "$@" 00:31:11.395 09:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:31:11.395 09:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:11.395 09:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:11.395 09:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:11.395 09:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:11.395 [2024-10-07 09:52:10.839301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.395 [2024-10-07 09:52:10.839695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.395 [2024-10-07 09:52:10.839711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.395 [2024-10-07 09:52:10.839716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.395 [2024-10-07 09:52:10.839865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.395 [2024-10-07 09:52:10.840013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.395 [2024-10-07 09:52:10.840022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.395 [2024-10-07 09:52:10.840028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.395 [2024-10-07 09:52:10.842422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.395 09:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=3546491 00:31:11.395 09:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 3546491 00:31:11.395 09:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:11.395 09:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # '[' -z 3546491 ']' 00:31:11.395 09:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.395 09:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local max_retries=100 00:31:11.395 09:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:11.395 09:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@843 -- # xtrace_disable 00:31:11.395 09:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:11.395 [2024-10-07 09:52:10.851877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.395 [2024-10-07 09:52:10.852231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.395 [2024-10-07 09:52:10.852245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.395 [2024-10-07 09:52:10.852251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.395 [2024-10-07 09:52:10.852400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.395 [2024-10-07 09:52:10.852548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.395 [2024-10-07 09:52:10.852554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.395 [2024-10-07 09:52:10.852559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.395 [2024-10-07 09:52:10.854956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.395 [2024-10-07 09:52:10.864585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.395 [2024-10-07 09:52:10.865063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.395 [2024-10-07 09:52:10.865077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.395 [2024-10-07 09:52:10.865083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.395 [2024-10-07 09:52:10.865232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.395 [2024-10-07 09:52:10.865380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.395 [2024-10-07 09:52:10.865386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.395 [2024-10-07 09:52:10.865392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.395 [2024-10-07 09:52:10.867787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.395 [2024-10-07 09:52:10.877223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.395 [2024-10-07 09:52:10.877742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.395 [2024-10-07 09:52:10.877755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.395 [2024-10-07 09:52:10.877760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.395 [2024-10-07 09:52:10.877909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.395 [2024-10-07 09:52:10.878057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.395 [2024-10-07 09:52:10.878063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.395 [2024-10-07 09:52:10.878068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.395 [2024-10-07 09:52:10.880458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.395 [2024-10-07 09:52:10.889899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.395 [2024-10-07 09:52:10.890356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.395 [2024-10-07 09:52:10.890368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.395 [2024-10-07 09:52:10.890373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.395 [2024-10-07 09:52:10.890522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.395 [2024-10-07 09:52:10.890675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.395 [2024-10-07 09:52:10.890682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.395 [2024-10-07 09:52:10.890687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.395 [2024-10-07 09:52:10.893075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.395 [2024-10-07 09:52:10.897534] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:31:11.395 [2024-10-07 09:52:10.897580] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:11.395 [2024-10-07 09:52:10.902520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.395 [2024-10-07 09:52:10.903058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.395 [2024-10-07 09:52:10.903070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.395 [2024-10-07 09:52:10.903076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.395 [2024-10-07 09:52:10.903224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.395 [2024-10-07 09:52:10.903373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.395 [2024-10-07 09:52:10.903379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.395 [2024-10-07 09:52:10.903384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.395 [2024-10-07 09:52:10.905778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.395 [2024-10-07 09:52:10.915212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.395 [2024-10-07 09:52:10.915672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.395 [2024-10-07 09:52:10.915687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.395 [2024-10-07 09:52:10.915693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.395 [2024-10-07 09:52:10.915842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.395 [2024-10-07 09:52:10.915990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.395 [2024-10-07 09:52:10.915995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.395 [2024-10-07 09:52:10.916000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.395 [2024-10-07 09:52:10.918389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.395 [2024-10-07 09:52:10.927827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.395 [2024-10-07 09:52:10.928164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.395 [2024-10-07 09:52:10.928176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.395 [2024-10-07 09:52:10.928182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.395 [2024-10-07 09:52:10.928330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.395 [2024-10-07 09:52:10.928478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.395 [2024-10-07 09:52:10.928485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.395 [2024-10-07 09:52:10.928489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.395 [2024-10-07 09:52:10.930882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.395 [2024-10-07 09:52:10.940434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.395 [2024-10-07 09:52:10.940966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.395 [2024-10-07 09:52:10.940998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.395 [2024-10-07 09:52:10.941007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.395 [2024-10-07 09:52:10.941173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.395 [2024-10-07 09:52:10.941324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.395 [2024-10-07 09:52:10.941331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.395 [2024-10-07 09:52:10.941337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.395 [2024-10-07 09:52:10.943737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.395 [2024-10-07 09:52:10.953052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.395 [2024-10-07 09:52:10.953328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.395 [2024-10-07 09:52:10.953342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.395 [2024-10-07 09:52:10.953348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.395 [2024-10-07 09:52:10.953498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.395 [2024-10-07 09:52:10.953656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.395 [2024-10-07 09:52:10.953663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.395 [2024-10-07 09:52:10.953668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.395 [2024-10-07 09:52:10.956060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.395 [2024-10-07 09:52:10.965639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.395 [2024-10-07 09:52:10.966179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.395 [2024-10-07 09:52:10.966210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.395 [2024-10-07 09:52:10.966219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.395 [2024-10-07 09:52:10.966383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.395 [2024-10-07 09:52:10.966534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.395 [2024-10-07 09:52:10.966541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.395 [2024-10-07 09:52:10.966547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.395 [2024-10-07 09:52:10.968946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.395 [2024-10-07 09:52:10.978241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.396 [2024-10-07 09:52:10.978871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.396 [2024-10-07 09:52:10.978902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.396 [2024-10-07 09:52:10.978911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.396 [2024-10-07 09:52:10.979075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.396 [2024-10-07 09:52:10.979226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.396 [2024-10-07 09:52:10.979232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.396 [2024-10-07 09:52:10.979238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.396 [2024-10-07 09:52:10.981637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.396 [2024-10-07 09:52:10.981910] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:11.396 [2024-10-07 09:52:10.990938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.396 [2024-10-07 09:52:10.991568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.396 [2024-10-07 09:52:10.991599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.396 [2024-10-07 09:52:10.991609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.396 [2024-10-07 09:52:10.991779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.396 [2024-10-07 09:52:10.991931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.396 [2024-10-07 09:52:10.991938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.396 [2024-10-07 09:52:10.991946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.396 [2024-10-07 09:52:10.994339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.396 [2024-10-07 09:52:11.003508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.396 [2024-10-07 09:52:11.004026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.396 [2024-10-07 09:52:11.004042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.396 [2024-10-07 09:52:11.004048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.396 [2024-10-07 09:52:11.004196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.396 [2024-10-07 09:52:11.004344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.396 [2024-10-07 09:52:11.004350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.396 [2024-10-07 09:52:11.004355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.396 [2024-10-07 09:52:11.006747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.396 [2024-10-07 09:52:11.016178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.396 [2024-10-07 09:52:11.016837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.396 [2024-10-07 09:52:11.016871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.396 [2024-10-07 09:52:11.016880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.396 [2024-10-07 09:52:11.017045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.396 [2024-10-07 09:52:11.017197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.396 [2024-10-07 09:52:11.017203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.396 [2024-10-07 09:52:11.017209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.396 [2024-10-07 09:52:11.019604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.396 [2024-10-07 09:52:11.028770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.396 [2024-10-07 09:52:11.029352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.396 [2024-10-07 09:52:11.029384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.396 [2024-10-07 09:52:11.029393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.396 [2024-10-07 09:52:11.029558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.396 [2024-10-07 09:52:11.029717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.396 [2024-10-07 09:52:11.029724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.396 [2024-10-07 09:52:11.029729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.396 [2024-10-07 09:52:11.032124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.396 [2024-10-07 09:52:11.034891] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:11.396 [2024-10-07 09:52:11.034916] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:11.396 [2024-10-07 09:52:11.034927] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:11.396 [2024-10-07 09:52:11.034932] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:11.396 [2024-10-07 09:52:11.034936] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:11.396 [2024-10-07 09:52:11.035692] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:31:11.396 [2024-10-07 09:52:11.035844] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.396 [2024-10-07 09:52:11.035847] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:31:11.396 [2024-10-07 09:52:11.041436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.396 [2024-10-07 09:52:11.041922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.396 [2024-10-07 09:52:11.041937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.396 [2024-10-07 09:52:11.041944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.396 [2024-10-07 09:52:11.042093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.396 [2024-10-07 09:52:11.042242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.396 [2024-10-07 09:52:11.042247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.396 [2024-10-07 09:52:11.042253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.396 [2024-10-07 09:52:11.044649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.396 [2024-10-07 09:52:11.054103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.396 [2024-10-07 09:52:11.054612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.396 [2024-10-07 09:52:11.054632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.396 [2024-10-07 09:52:11.054638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.396 [2024-10-07 09:52:11.054787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.396 [2024-10-07 09:52:11.054936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.396 [2024-10-07 09:52:11.054942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.396 [2024-10-07 09:52:11.054947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.657 [2024-10-07 09:52:11.057335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.657 [2024-10-07 09:52:11.066677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.657 [2024-10-07 09:52:11.067161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.657 [2024-10-07 09:52:11.067174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.657 [2024-10-07 09:52:11.067180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.657 [2024-10-07 09:52:11.067329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.657 [2024-10-07 09:52:11.067476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.657 [2024-10-07 09:52:11.067483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.657 [2024-10-07 09:52:11.067497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.657 [2024-10-07 09:52:11.069891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.657 [2024-10-07 09:52:11.079329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.657 [2024-10-07 09:52:11.079943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.657 [2024-10-07 09:52:11.079979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.657 [2024-10-07 09:52:11.079989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.657 [2024-10-07 09:52:11.080157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.657 [2024-10-07 09:52:11.080309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.657 [2024-10-07 09:52:11.080316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.657 [2024-10-07 09:52:11.080321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.657 [2024-10-07 09:52:11.082719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.657 [2024-10-07 09:52:11.092010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.657 [2024-10-07 09:52:11.092490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.657 [2024-10-07 09:52:11.092522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.657 [2024-10-07 09:52:11.092532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.657 [2024-10-07 09:52:11.092704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.657 [2024-10-07 09:52:11.092855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.657 [2024-10-07 09:52:11.092862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.657 [2024-10-07 09:52:11.092868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.657 [2024-10-07 09:52:11.095270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.657 [2024-10-07 09:52:11.104704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.657 [2024-10-07 09:52:11.105198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.657 [2024-10-07 09:52:11.105213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.657 [2024-10-07 09:52:11.105218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.657 [2024-10-07 09:52:11.105367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.657 [2024-10-07 09:52:11.105516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.657 [2024-10-07 09:52:11.105521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.657 [2024-10-07 09:52:11.105526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.657 [2024-10-07 09:52:11.107918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.657 [2024-10-07 09:52:11.117343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.657 [2024-10-07 09:52:11.117933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.657 [2024-10-07 09:52:11.117968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.657 [2024-10-07 09:52:11.117977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.657 [2024-10-07 09:52:11.118143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.657 [2024-10-07 09:52:11.118294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.657 [2024-10-07 09:52:11.118300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.657 [2024-10-07 09:52:11.118306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.657 [2024-10-07 09:52:11.120704] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.657 [2024-10-07 09:52:11.129989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.657 [2024-10-07 09:52:11.130443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.657 [2024-10-07 09:52:11.130458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.657 [2024-10-07 09:52:11.130463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.657 [2024-10-07 09:52:11.130612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.657 [2024-10-07 09:52:11.130766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.657 [2024-10-07 09:52:11.130772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.657 [2024-10-07 09:52:11.130777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.658 [2024-10-07 09:52:11.133163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.658 [2024-10-07 09:52:11.142586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.658 [2024-10-07 09:52:11.143195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.658 [2024-10-07 09:52:11.143226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.658 [2024-10-07 09:52:11.143235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.658 [2024-10-07 09:52:11.143399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.658 [2024-10-07 09:52:11.143551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.658 [2024-10-07 09:52:11.143557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.658 [2024-10-07 09:52:11.143562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.658 [2024-10-07 09:52:11.145960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.658 [2024-10-07 09:52:11.155258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.658 [2024-10-07 09:52:11.155833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.658 [2024-10-07 09:52:11.155864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.658 [2024-10-07 09:52:11.155874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.658 [2024-10-07 09:52:11.156038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.658 [2024-10-07 09:52:11.156193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.658 [2024-10-07 09:52:11.156200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.658 [2024-10-07 09:52:11.156205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.658 [2024-10-07 09:52:11.158599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.658 [2024-10-07 09:52:11.167898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.658 [2024-10-07 09:52:11.168491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.658 [2024-10-07 09:52:11.168522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.658 [2024-10-07 09:52:11.168531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.658 [2024-10-07 09:52:11.168701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.658 [2024-10-07 09:52:11.168853] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.658 [2024-10-07 09:52:11.168860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.658 [2024-10-07 09:52:11.168865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.658 [2024-10-07 09:52:11.171255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.658 [2024-10-07 09:52:11.180538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.658 [2024-10-07 09:52:11.181121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.658 [2024-10-07 09:52:11.181152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.658 [2024-10-07 09:52:11.181161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.658 [2024-10-07 09:52:11.181325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.658 [2024-10-07 09:52:11.181476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.658 [2024-10-07 09:52:11.181483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.658 [2024-10-07 09:52:11.181488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.658 [2024-10-07 09:52:11.183883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.658 [2024-10-07 09:52:11.193166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.658 [2024-10-07 09:52:11.193815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.658 [2024-10-07 09:52:11.193846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.658 [2024-10-07 09:52:11.193854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.658 [2024-10-07 09:52:11.194018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.658 [2024-10-07 09:52:11.194169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.658 [2024-10-07 09:52:11.194176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.658 [2024-10-07 09:52:11.194181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.658 [2024-10-07 09:52:11.196591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.658 [2024-10-07 09:52:11.205739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.658 [2024-10-07 09:52:11.206074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.658 [2024-10-07 09:52:11.206088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.658 [2024-10-07 09:52:11.206094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.658 [2024-10-07 09:52:11.206242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.658 [2024-10-07 09:52:11.206391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.658 [2024-10-07 09:52:11.206396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.658 [2024-10-07 09:52:11.206401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.658 [2024-10-07 09:52:11.208791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.658 [2024-10-07 09:52:11.218350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.658 [2024-10-07 09:52:11.218931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.658 [2024-10-07 09:52:11.218962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.658 [2024-10-07 09:52:11.218971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.658 [2024-10-07 09:52:11.219135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.658 [2024-10-07 09:52:11.219286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.658 [2024-10-07 09:52:11.219292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.658 [2024-10-07 09:52:11.219298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.658 [2024-10-07 09:52:11.221695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.658 [2024-10-07 09:52:11.230982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.658 [2024-10-07 09:52:11.231493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.658 [2024-10-07 09:52:11.231507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.658 [2024-10-07 09:52:11.231513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.658 [2024-10-07 09:52:11.231666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.658 [2024-10-07 09:52:11.231815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.658 [2024-10-07 09:52:11.231820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.658 [2024-10-07 09:52:11.231825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.658 [2024-10-07 09:52:11.234210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.658 [2024-10-07 09:52:11.243632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.658 [2024-10-07 09:52:11.244103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.658 [2024-10-07 09:52:11.244115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.658 [2024-10-07 09:52:11.244124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.658 [2024-10-07 09:52:11.244272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.658 [2024-10-07 09:52:11.244420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.658 [2024-10-07 09:52:11.244426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.658 [2024-10-07 09:52:11.244431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.658 [2024-10-07 09:52:11.246823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.658 [2024-10-07 09:52:11.256247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.658 [2024-10-07 09:52:11.256780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.658 [2024-10-07 09:52:11.256792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.658 [2024-10-07 09:52:11.256798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.658 [2024-10-07 09:52:11.256946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.658 [2024-10-07 09:52:11.257094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.658 [2024-10-07 09:52:11.257099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.658 [2024-10-07 09:52:11.257105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.658 [2024-10-07 09:52:11.259488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.658 [2024-10-07 09:52:11.268910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.658 [2024-10-07 09:52:11.269370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-10-07 09:52:11.269401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.659 [2024-10-07 09:52:11.269411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.659 [2024-10-07 09:52:11.269574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.659 [2024-10-07 09:52:11.269731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.659 [2024-10-07 09:52:11.269738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.659 [2024-10-07 09:52:11.269744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.659 [2024-10-07 09:52:11.272133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.659 [2024-10-07 09:52:11.281555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.659 [2024-10-07 09:52:11.282063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-10-07 09:52:11.282078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.659 [2024-10-07 09:52:11.282083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.659 [2024-10-07 09:52:11.282232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.659 [2024-10-07 09:52:11.282380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.659 [2024-10-07 09:52:11.282389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.659 [2024-10-07 09:52:11.282394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.659 [2024-10-07 09:52:11.284783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.659 [2024-10-07 09:52:11.294199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.659 [2024-10-07 09:52:11.294653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-10-07 09:52:11.294665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.659 [2024-10-07 09:52:11.294671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.659 [2024-10-07 09:52:11.294819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.659 [2024-10-07 09:52:11.294967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.659 [2024-10-07 09:52:11.294972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.659 [2024-10-07 09:52:11.294977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.659 [2024-10-07 09:52:11.297369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.659 [2024-10-07 09:52:11.306785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.659 [2024-10-07 09:52:11.307238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-10-07 09:52:11.307250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.659 [2024-10-07 09:52:11.307255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.659 [2024-10-07 09:52:11.307403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.659 [2024-10-07 09:52:11.307551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.659 [2024-10-07 09:52:11.307556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.659 [2024-10-07 09:52:11.307561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.659 [2024-10-07 09:52:11.309948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.920 [2024-10-07 09:52:11.319366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.920 [2024-10-07 09:52:11.319928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.920 [2024-10-07 09:52:11.319960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.920 [2024-10-07 09:52:11.319969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.920 [2024-10-07 09:52:11.320133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.920 [2024-10-07 09:52:11.320284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.920 [2024-10-07 09:52:11.320290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.920 [2024-10-07 09:52:11.320296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.920 [2024-10-07 09:52:11.322690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.920 [2024-10-07 09:52:11.331979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.920 [2024-10-07 09:52:11.332309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.920 [2024-10-07 09:52:11.332324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.921 [2024-10-07 09:52:11.332329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.921 [2024-10-07 09:52:11.332478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.921 [2024-10-07 09:52:11.332632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.921 [2024-10-07 09:52:11.332638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.921 [2024-10-07 09:52:11.332643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.921 [2024-10-07 09:52:11.335030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.921 4922.83 IOPS, 19.23 MiB/s [2024-10-07 09:52:11.345726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.921 [2024-10-07 09:52:11.346287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.921 [2024-10-07 09:52:11.346318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.921 [2024-10-07 09:52:11.346327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.921 [2024-10-07 09:52:11.346490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.921 [2024-10-07 09:52:11.346648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.921 [2024-10-07 09:52:11.346655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.921 [2024-10-07 09:52:11.346660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.921 [2024-10-07 09:52:11.349051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.921 [2024-10-07 09:52:11.358345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.921 [2024-10-07 09:52:11.358690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.921 [2024-10-07 09:52:11.358705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.921 [2024-10-07 09:52:11.358711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.921 [2024-10-07 09:52:11.358859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.921 [2024-10-07 09:52:11.359008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.921 [2024-10-07 09:52:11.359014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.921 [2024-10-07 09:52:11.359019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.921 [2024-10-07 09:52:11.361404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
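The "4922.83 IOPS, 19.23 MiB/s" fragment at the head of this line is bdevperf's periodic throughput sample, interleaved with the driver errors because both streams share the console. The MiB/s figure is just IOPS multiplied by the 4096-byte I/O size and divided by 2^20, i.e. IOPS/256. A quick way to pull the samples out of a saved console log and check one (the log file name here is hypothetical; bc is assumed available):

    grep -oE '[0-9]+\.[0-9]+ IOPS, [0-9]+\.[0-9]+ MiB/s' console.log
    # 4922.83 IOPS / 256 = 19.2298 MiB/s, which rounds to the 19.23 printed
    echo 'scale=4; 4922.83 / 256' | bc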
00:31:11.921 [2024-10-07 09:52:11.370963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.921 [2024-10-07 09:52:11.371429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.921 [2024-10-07 09:52:11.371441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.921 [2024-10-07 09:52:11.371446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.921 [2024-10-07 09:52:11.371597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.921 [2024-10-07 09:52:11.371750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.921 [2024-10-07 09:52:11.371756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.921 [2024-10-07 09:52:11.371761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.921 [2024-10-07 09:52:11.374145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.921 [2024-10-07 09:52:11.383601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.921 [2024-10-07 09:52:11.384189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.921 [2024-10-07 09:52:11.384221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.921 [2024-10-07 09:52:11.384230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.921 [2024-10-07 09:52:11.384395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.921 [2024-10-07 09:52:11.384545] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.921 [2024-10-07 09:52:11.384551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.921 [2024-10-07 09:52:11.384557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.921 [2024-10-07 09:52:11.386954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.921 [2024-10-07 09:52:11.396245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.921 [2024-10-07 09:52:11.396599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.921 [2024-10-07 09:52:11.396613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.921 [2024-10-07 09:52:11.396622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.921 [2024-10-07 09:52:11.396771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.921 [2024-10-07 09:52:11.396919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.921 [2024-10-07 09:52:11.396925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.921 [2024-10-07 09:52:11.396929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.921 [2024-10-07 09:52:11.399313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.921 [2024-10-07 09:52:11.408872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.921 [2024-10-07 09:52:11.409324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.921 [2024-10-07 09:52:11.409337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.921 [2024-10-07 09:52:11.409342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.921 [2024-10-07 09:52:11.409490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.921 [2024-10-07 09:52:11.409642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.921 [2024-10-07 09:52:11.409648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.921 [2024-10-07 09:52:11.409657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.921 [2024-10-07 09:52:11.412041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.921 [2024-10-07 09:52:11.421457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.921 [2024-10-07 09:52:11.422041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.921 [2024-10-07 09:52:11.422072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.921 [2024-10-07 09:52:11.422081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.921 [2024-10-07 09:52:11.422245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.921 [2024-10-07 09:52:11.422396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.921 [2024-10-07 09:52:11.422403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.921 [2024-10-07 09:52:11.422408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.921 [2024-10-07 09:52:11.424801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.921 [2024-10-07 09:52:11.434080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.921 [2024-10-07 09:52:11.434664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.921 [2024-10-07 09:52:11.434695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.921 [2024-10-07 09:52:11.434704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.921 [2024-10-07 09:52:11.434870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.921 [2024-10-07 09:52:11.435021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.921 [2024-10-07 09:52:11.435027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.921 [2024-10-07 09:52:11.435032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.921 [2024-10-07 09:52:11.437429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.921 [2024-10-07 09:52:11.446713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.921 [2024-10-07 09:52:11.447282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.921 [2024-10-07 09:52:11.447313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.921 [2024-10-07 09:52:11.447322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.921 [2024-10-07 09:52:11.447486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.921 [2024-10-07 09:52:11.447644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.921 [2024-10-07 09:52:11.447651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.921 [2024-10-07 09:52:11.447656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.921 [2024-10-07 09:52:11.450055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.921 [2024-10-07 09:52:11.459340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.921 [2024-10-07 09:52:11.459841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.921 [2024-10-07 09:52:11.459856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.921 [2024-10-07 09:52:11.459862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.921 [2024-10-07 09:52:11.460010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.921 [2024-10-07 09:52:11.460158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.922 [2024-10-07 09:52:11.460164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.922 [2024-10-07 09:52:11.460169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.922 [2024-10-07 09:52:11.462554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.922 [2024-10-07 09:52:11.471971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.922 [2024-10-07 09:52:11.472431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.922 [2024-10-07 09:52:11.472443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.922 [2024-10-07 09:52:11.472448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.922 [2024-10-07 09:52:11.472597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.922 [2024-10-07 09:52:11.472748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.922 [2024-10-07 09:52:11.472754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.922 [2024-10-07 09:52:11.472759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.922 [2024-10-07 09:52:11.475142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.922 [2024-10-07 09:52:11.484557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.922 [2024-10-07 09:52:11.485013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.922 [2024-10-07 09:52:11.485025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.922 [2024-10-07 09:52:11.485031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.922 [2024-10-07 09:52:11.485178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.922 [2024-10-07 09:52:11.485326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.922 [2024-10-07 09:52:11.485331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.922 [2024-10-07 09:52:11.485336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.922 [2024-10-07 09:52:11.487722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.922 [2024-10-07 09:52:11.497143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.922 [2024-10-07 09:52:11.497600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.922 [2024-10-07 09:52:11.497611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.922 [2024-10-07 09:52:11.497620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.922 [2024-10-07 09:52:11.497771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.922 [2024-10-07 09:52:11.497918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.922 [2024-10-07 09:52:11.497924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.922 [2024-10-07 09:52:11.497929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.922 [2024-10-07 09:52:11.500312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.922 [2024-10-07 09:52:11.509728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.922 [2024-10-07 09:52:11.510197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.922 [2024-10-07 09:52:11.510209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.922 [2024-10-07 09:52:11.510214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.922 [2024-10-07 09:52:11.510362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.922 [2024-10-07 09:52:11.510510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.922 [2024-10-07 09:52:11.510516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.922 [2024-10-07 09:52:11.510521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.922 [2024-10-07 09:52:11.512946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.922 [2024-10-07 09:52:11.522375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.922 [2024-10-07 09:52:11.522728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.922 [2024-10-07 09:52:11.522741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.922 [2024-10-07 09:52:11.522746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.922 [2024-10-07 09:52:11.522895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.922 [2024-10-07 09:52:11.523042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.922 [2024-10-07 09:52:11.523048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.922 [2024-10-07 09:52:11.523053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.922 [2024-10-07 09:52:11.525435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.922 [2024-10-07 09:52:11.535003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.922 [2024-10-07 09:52:11.535458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.922 [2024-10-07 09:52:11.535470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.922 [2024-10-07 09:52:11.535475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.922 [2024-10-07 09:52:11.535627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.922 [2024-10-07 09:52:11.535776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.922 [2024-10-07 09:52:11.535781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.922 [2024-10-07 09:52:11.535790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.922 [2024-10-07 09:52:11.538176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.922 [2024-10-07 09:52:11.547598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.922 [2024-10-07 09:52:11.548043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.922 [2024-10-07 09:52:11.548074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.922 [2024-10-07 09:52:11.548083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.922 [2024-10-07 09:52:11.548248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.922 [2024-10-07 09:52:11.548399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.922 [2024-10-07 09:52:11.548405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.922 [2024-10-07 09:52:11.548411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.922 [2024-10-07 09:52:11.550818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.922 [2024-10-07 09:52:11.560249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.922 [2024-10-07 09:52:11.560886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.922 [2024-10-07 09:52:11.560917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.922 [2024-10-07 09:52:11.560926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.922 [2024-10-07 09:52:11.561091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.922 [2024-10-07 09:52:11.561242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.922 [2024-10-07 09:52:11.561248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.922 [2024-10-07 09:52:11.561253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.922 [2024-10-07 09:52:11.563651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.922 [2024-10-07 09:52:11.572938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.922 [2024-10-07 09:52:11.573418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.922 [2024-10-07 09:52:11.573449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:11.922 [2024-10-07 09:52:11.573458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:11.922 [2024-10-07 09:52:11.573630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:11.922 [2024-10-07 09:52:11.573782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.922 [2024-10-07 09:52:11.573789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.922 [2024-10-07 09:52:11.573794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.922 [2024-10-07 09:52:11.576183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.185 [2024-10-07 09:52:11.585610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.185 [2024-10-07 09:52:11.586211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.185 [2024-10-07 09:52:11.586246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:12.185 [2024-10-07 09:52:11.586255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:12.185 [2024-10-07 09:52:11.586418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:12.185 [2024-10-07 09:52:11.586570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.185 [2024-10-07 09:52:11.586576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.185 [2024-10-07 09:52:11.586582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.185 [2024-10-07 09:52:11.588978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.185 [2024-10-07 09:52:11.598271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.185 [2024-10-07 09:52:11.598626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.185 [2024-10-07 09:52:11.598641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:12.185 [2024-10-07 09:52:11.598647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:12.185 [2024-10-07 09:52:11.598795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:12.185 [2024-10-07 09:52:11.598943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.185 [2024-10-07 09:52:11.598948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.185 [2024-10-07 09:52:11.598953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.185 [2024-10-07 09:52:11.601339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.185 [2024-10-07 09:52:11.610910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.185 [2024-10-07 09:52:11.611369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.185 [2024-10-07 09:52:11.611382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:12.185 [2024-10-07 09:52:11.611388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:12.185 [2024-10-07 09:52:11.611537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:12.185 [2024-10-07 09:52:11.611690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.185 [2024-10-07 09:52:11.611697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.185 [2024-10-07 09:52:11.611702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.185 [2024-10-07 09:52:11.614088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.185 [2024-10-07 09:52:11.623512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.185 [2024-10-07 09:52:11.624061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.185 [2024-10-07 09:52:11.624092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:12.185 [2024-10-07 09:52:11.624102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:12.185 [2024-10-07 09:52:11.624266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:12.185 [2024-10-07 09:52:11.624421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.185 [2024-10-07 09:52:11.624428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.185 [2024-10-07 09:52:11.624434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.185 [2024-10-07 09:52:11.626832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.185 [2024-10-07 09:52:11.636125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.185 [2024-10-07 09:52:11.636573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.185 [2024-10-07 09:52:11.636589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:12.185 [2024-10-07 09:52:11.636594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:12.185 [2024-10-07 09:52:11.636747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:12.185 [2024-10-07 09:52:11.636896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.185 [2024-10-07 09:52:11.636904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.185 [2024-10-07 09:52:11.636909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.185 [2024-10-07 09:52:11.639299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.185 [2024-10-07 09:52:11.648742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.185 [2024-10-07 09:52:11.649287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.185 [2024-10-07 09:52:11.649318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:12.185 [2024-10-07 09:52:11.649328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:12.185 [2024-10-07 09:52:11.649495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:12.185 [2024-10-07 09:52:11.649653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.185 [2024-10-07 09:52:11.649660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.185 [2024-10-07 09:52:11.649665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.185 [2024-10-07 09:52:11.652065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.185 [2024-10-07 09:52:11.661359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.186 [2024-10-07 09:52:11.661981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.186 [2024-10-07 09:52:11.662013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:12.186 [2024-10-07 09:52:11.662022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:12.186 [2024-10-07 09:52:11.662185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:12.186 [2024-10-07 09:52:11.662336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.186 [2024-10-07 09:52:11.662343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.186 [2024-10-07 09:52:11.662349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.186 [2024-10-07 09:52:11.664751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.186 [2024-10-07 09:52:11.674040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.186 [2024-10-07 09:52:11.674516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.186 [2024-10-07 09:52:11.674547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:12.186 [2024-10-07 09:52:11.674556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:12.186 [2024-10-07 09:52:11.674726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:12.186 [2024-10-07 09:52:11.674878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.186 [2024-10-07 09:52:11.674884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.186 [2024-10-07 09:52:11.674890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.186 [2024-10-07 09:52:11.677280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:12.186 [2024-10-07 09:52:11.686712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.186 [2024-10-07 09:52:11.687202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.186 [2024-10-07 09:52:11.687216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420 00:31:12.186 [2024-10-07 09:52:11.687222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set 00:31:12.186 [2024-10-07 09:52:11.687371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor 00:31:12.186 [2024-10-07 09:52:11.687519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:12.186 [2024-10-07 09:52:11.687525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:12.186 [2024-10-07 09:52:11.687530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.186 [2024-10-07 09:52:11.689921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:12.186 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@863 -- # (( i == 0 ))
00:31:12.186 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@867 -- # return 0
00:31:12.186 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:31:12.186 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@733 -- # xtrace_disable
00:31:12.186 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:12.186 [2024-10-07 09:52:11.699358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:12.186 [2024-10-07 09:52:11.699931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.186 [2024-10-07 09:52:11.699962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:12.186 [2024-10-07 09:52:11.699971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:12.186 [2024-10-07 09:52:11.700135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:12.186 [2024-10-07 09:52:11.700286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:12.186 [2024-10-07 09:52:11.700293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:12.186 [2024-10-07 09:52:11.700298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:12.186 [2024-10-07 09:52:11.702707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:12.186 [2024-10-07 09:52:11.712004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:12.186 [2024-10-07 09:52:11.712605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.186 [2024-10-07 09:52:11.712644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:12.186 [2024-10-07 09:52:11.712653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:12.186 [2024-10-07 09:52:11.712819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:12.186 [2024-10-07 09:52:11.712971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:12.186 [2024-10-07 09:52:11.712978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:12.186 [2024-10-07 09:52:11.712983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:12.186 [2024-10-07 09:52:11.715373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:12.186 [2024-10-07 09:52:11.724666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:12.186 [2024-10-07 09:52:11.725160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.186 [2024-10-07 09:52:11.725174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:12.186 [2024-10-07 09:52:11.725180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:12.186 [2024-10-07 09:52:11.725328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:12.186 [2024-10-07 09:52:11.725478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:12.186 [2024-10-07 09:52:11.725484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:12.186 [2024-10-07 09:52:11.725489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:12.186 [2024-10-07 09:52:11.727879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:12.186 [2024-10-07 09:52:11.737308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:12.186 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:12.186 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:31:12.186 [2024-10-07 09:52:11.737859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.186 [2024-10-07 09:52:11.737891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:12.186 [2024-10-07 09:52:11.737900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:12.186 [2024-10-07 09:52:11.738064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:12.186 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@564 -- # xtrace_disable
00:31:12.186 [2024-10-07 09:52:11.738216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:12.186 [2024-10-07 09:52:11.738222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:12.186 [2024-10-07 09:52:11.738227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:12.186 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:12.186 [2024-10-07 09:52:11.740626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:12.186 [2024-10-07 09:52:11.743944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:12.186 [2024-10-07 09:52:11.749917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:12.186 [2024-10-07 09:52:11.750360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.186 [2024-10-07 09:52:11.750392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:12.186 [2024-10-07 09:52:11.750401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:12.186 [2024-10-07 09:52:11.750565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:12.186 [2024-10-07 09:52:11.750734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:12.186 [2024-10-07 09:52:11.750742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:12.186 [2024-10-07 09:52:11.750747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:12.186 [2024-10-07 09:52:11.753138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:12.186 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:31:12.186 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:12.186 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@564 -- # xtrace_disable
00:31:12.186 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:12.186 [2024-10-07 09:52:11.762569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:12.186 [2024-10-07 09:52:11.763120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.186 [2024-10-07 09:52:11.763151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:12.186 [2024-10-07 09:52:11.763160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:12.186 [2024-10-07 09:52:11.763324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:12.186 [2024-10-07 09:52:11.763476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:12.186 [2024-10-07 09:52:11.763482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:12.186 [2024-10-07 09:52:11.763487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:12.187 [2024-10-07 09:52:11.765885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:12.187 [2024-10-07 09:52:11.775187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:12.187 [2024-10-07 09:52:11.775713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.187 [2024-10-07 09:52:11.775744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:12.187 [2024-10-07 09:52:11.775754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:12.187 [2024-10-07 09:52:11.775921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:12.187 [2024-10-07 09:52:11.776072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:12.187 [2024-10-07 09:52:11.776079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:12.187 [2024-10-07 09:52:11.776084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:12.187 Malloc0
00:31:12.187 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:31:12.187 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:12.187 [2024-10-07 09:52:11.778486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:12.187 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@564 -- # xtrace_disable
00:31:12.187 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:12.187 [2024-10-07 09:52:11.787778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:12.187 [2024-10-07 09:52:11.788250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.187 [2024-10-07 09:52:11.788281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:12.187 [2024-10-07 09:52:11.788290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:12.187 [2024-10-07 09:52:11.788456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:12.187 [2024-10-07 09:52:11.788607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:12.187 [2024-10-07 09:52:11.788613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:12.187 [2024-10-07 09:52:11.788627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:12.187 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:31:12.187 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:12.187 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@564 -- # xtrace_disable
00:31:12.187 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:12.187 [2024-10-07 09:52:11.791018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:12.187 [2024-10-07 09:52:11.800451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:12.187 [2024-10-07 09:52:11.801085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.187 [2024-10-07 09:52:11.801116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a2a0 with addr=10.0.0.2, port=4420
00:31:12.187 [2024-10-07 09:52:11.801126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203a2a0 is same with the state(6) to be set
00:31:12.187 [2024-10-07 09:52:11.801292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203a2a0 (9): Bad file descriptor
00:31:12.187 [2024-10-07 09:52:11.801443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:12.187 [2024-10-07 09:52:11.801449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:12.187 [2024-10-07 09:52:11.801455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:12.187 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:31:12.187 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:12.187 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@564 -- # xtrace_disable
00:31:12.187 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:12.187 [2024-10-07 09:52:11.803851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:12.187 [2024-10-07 09:52:11.808922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:12.187 [2024-10-07 09:52:11.813138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:12.187 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:31:12.187 09:52:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3545285
00:31:12.447 [2024-10-07 09:52:11.892582] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
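Threaded through the reset noise above is host/bdevperf.sh bringing up the target side over RPC: create the TCP transport, create a 64 MB malloc bdev with 512-byte blocks (the bare "Malloc0" line earlier is that call's output), create subsystem nqn.2016-06.io.spdk:cnode1, attach Malloc0 as its namespace, and add the TCP listener on 10.0.0.2:4420. As soon as "NVMe/TCP Target Listening" appears, the long run of failed resets ends with "Resetting controller successful." Replayed by hand with SPDK's scripts/rpc.py, the sequence would look like this sketch (commands and arguments are taken verbatim from the trace; a target reachable at the default /var/tmp/spdk.sock RPC socket is assumed):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # prints the new bdev name: Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420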
00:31:21.119 5068.71 IOPS, 19.80 MiB/s 6043.00 IOPS, 23.61 MiB/s 6825.33 IOPS, 26.66 MiB/s 7438.30 IOPS, 29.06 MiB/s 7947.82 IOPS, 31.05 MiB/s 8375.17 IOPS, 32.72 MiB/s 8736.69 IOPS, 34.13 MiB/s 9058.79 IOPS, 35.39 MiB/s
00:31:21.119 Latency(us)
00:31:21.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:21.119 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:21.119 Verification LBA range: start 0x0 length 0x4000
00:31:21.119 Nvme1n1 : 15.00 9308.99 36.36 13704.09 0.00 5543.43 542.72 13325.65
00:31:21.119 ===================================================================================================================
00:31:21.119 Total : 9308.99 36.36 13704.09 0.00 5543.43 542.72 13325.65
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@564 -- # xtrace_disable
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:21.119 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 3546491 ']'
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 3546491
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' -z 3546491 ']'
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # kill -0 3546491
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # uname
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']'
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3546491
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # process_name=reactor_1
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']'
00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 --
# echo 'killing process with pid 3546491' 00:31:21.119 killing process with pid 3546491 00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # kill 3546491 00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@977 -- # wait 3546491 00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:21.119 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:31:21.379 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:21.379 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:21.379 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.379 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:21.379 09:52:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.291 09:52:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:23.291 00:31:23.291 real 0m28.596s 00:31:23.291 user 1m3.674s 00:31:23.291 sys 0m7.880s 00:31:23.291 09:52:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # xtrace_disable 00:31:23.291 09:52:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:23.291 ************************************ 00:31:23.291 END TEST nvmf_bdevperf 00:31:23.291 ************************************ 00:31:23.291 09:52:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:23.291 09:52:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:31:23.291 09:52:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1110 -- # xtrace_disable 00:31:23.291 09:52:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.291 ************************************ 00:31:23.291 START TEST nvmf_target_disconnect 00:31:23.291 ************************************ 00:31:23.291 09:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:23.553 * Looking for test storage... 
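Just below, the harness gates its coverage options on the installed lcov version: lt 1.15 2 asks whether 1.15 sorts before 2, with cmp_versions splitting each version string on '.', '-' and ':' and comparing field by field (lcov 1.x and 2.x spell the branch-coverage flags differently, hence the two LCOV_OPTS variants in the trace). A stand-alone sketch of that comparison, numeric fields only as in the trace:

    # sketch of scripts/common.sh's lt()/cmp_versions behaviour
    lt() {
        local IFS=.-:                       # field separators used by the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                            # equal is not less-than
    }
    lt 1.15 2 && echo "lcov 1.x flags selected"   # the branch the trace below takes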
00:31:23.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1626 -- # lcov --version 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:31:23.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.553 --rc genhtml_branch_coverage=1 00:31:23.553 --rc genhtml_function_coverage=1 00:31:23.553 --rc genhtml_legend=1 00:31:23.553 --rc geninfo_all_blocks=1 00:31:23.553 --rc geninfo_unexecuted_blocks=1 00:31:23.553 00:31:23.553 ' 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:31:23.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.553 --rc genhtml_branch_coverage=1 00:31:23.553 --rc genhtml_function_coverage=1 00:31:23.553 --rc genhtml_legend=1 00:31:23.553 --rc geninfo_all_blocks=1 00:31:23.553 --rc geninfo_unexecuted_blocks=1 00:31:23.553 00:31:23.553 ' 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:31:23.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.553 --rc genhtml_branch_coverage=1 00:31:23.553 --rc genhtml_function_coverage=1 00:31:23.553 --rc genhtml_legend=1 00:31:23.553 --rc geninfo_all_blocks=1 00:31:23.553 --rc geninfo_unexecuted_blocks=1 00:31:23.553 00:31:23.553 ' 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:31:23.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.553 --rc genhtml_branch_coverage=1 00:31:23.553 --rc genhtml_function_coverage=1 00:31:23.553 --rc genhtml_legend=1 00:31:23.553 --rc geninfo_all_blocks=1 00:31:23.553 --rc geninfo_unexecuted_blocks=1 00:31:23.553 00:31:23.553 ' 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:23.553 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.554 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.554 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.554 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:31:23.554 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.554 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:31:23.554 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:23.554 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:23.554 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:23.554 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:23.554 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:23.554 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:23.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:23.554 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:23.554 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:23.554 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:23.816 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:23.816 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:23.816 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:23.816 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:31:23.816 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:23.816 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:23.816 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:23.816 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:23.816 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:23.816 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.816 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:23.816 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:31:23.816 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:23.816 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:23.816 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:31:23.816 09:52:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:31.952 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:31.952 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:31:31.952 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:31.952 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:31.952 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:31.952 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:31.952 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:31.952 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:31:31.952 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:31.952 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
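The arrays being filled in around this point drive NIC auto-detection: each supported device is keyed by PCI vendor:device ID (0x8086 is Intel, 0x15b3 is Mellanox) and bucketed into the e810, x722 or mlx family, after which the pci_devs loop below matches what the machine actually has. The same classification written out directly (a sketch; only IDs visible in the surrounding trace are listed):

    declare -A nic_family=(
        [0x8086:0x1592]=e810 [0x8086:0x159b]=e810      # Intel E810
        [0x8086:0x37d2]=x722                           # Intel X722
        [0x15b3:0xa2dc]=mlx  [0x15b3:0x1021]=mlx [0x15b3:0xa2d6]=mlx
        [0x15b3:0x101d]=mlx  [0x15b3:0x101b]=mlx [0x15b3:0x1017]=mlx
        [0x15b3:0x1019]=mlx  [0x15b3:0x1015]=mlx [0x15b3:0x1013]=mlx
    )
    echo "${nic_family[0x8086:0x159b]}"   # -> e810, the ID reported for 0000:31:00.0 below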
00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:31.953 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:31.953 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:31.953 Found net devices under 0000:31:00.0: cvl_0_0 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:31.953 Found net devices under 0000:31:00.1: cvl_0_1 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:31.953 
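With both E810 ports identified, the commands traced just below split them across network namespaces so target and initiator can share one machine: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2 (target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side). Condensed from the trace that follows, xtrace noise stripped:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target NIC into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                                    # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # and back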
09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:31.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:31.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms
00:31:31.953
00:31:31.953 --- 10.0.0.2 ping statistics ---
00:31:31.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:31.953 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms
00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:31.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:31.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:31:31.953 00:31:31.953 --- 10.0.0.1 ping statistics --- 00:31:31.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.953 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:31.953 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:31.954 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:31:31.954 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1110 -- # xtrace_disable 00:31:31.954 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:31.954 ************************************ 00:31:31.954 START TEST nvmf_target_disconnect_tc1 00:31:31.954 ************************************ 00:31:31.954 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # nvmf_target_disconnect_tc1 00:31:31.954 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:31.954 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # local es=0 00:31:31.954 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:31.954 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:31.954 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:31:31.954 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@645 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:31.954 09:52:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:31:31.954 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@647 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:31.954 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:31:31.954 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@647 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:31.954 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@647 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:31:31.954 09:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@656 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:31.954 [2024-10-07 09:52:31.111273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.954 [2024-10-07 09:52:31.111336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3dc0 with addr=10.0.0.2, port=4420 00:31:31.954 [2024-10-07 09:52:31.111368] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:31.954 [2024-10-07 09:52:31.111380] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:31.954 [2024-10-07 09:52:31.111389] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:31:31.954 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:31.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:31.954 Initializing NVMe Controllers 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@656 -- # es=1 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:31:31.954 00:31:31.954 real 0m0.133s 00:31:31.954 user 0m0.047s 00:31:31.954 sys 0m0.086s 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # xtrace_disable 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:31.954 ************************************ 00:31:31.954 END TEST nvmf_target_disconnect_tc1 00:31:31.954 ************************************ 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1110 -- # 
xtrace_disable 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:31.954 ************************************ 00:31:31.954 START TEST nvmf_target_disconnect_tc2 00:31:31.954 ************************************ 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # nvmf_target_disconnect_tc2 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3552745 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3552745 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # '[' -z 3552745 ']' 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local max_retries=100 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@843 -- # xtrace_disable 00:31:31.954 09:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:31.954 [2024-10-07 09:52:31.270773] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:31:31.954 [2024-10-07 09:52:31.270835] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:31.954 [2024-10-07 09:52:31.362400] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:31.954 [2024-10-07 09:52:31.455167] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:31.954 [2024-10-07 09:52:31.455228] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
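nvmfappstart -m 0xF0 becomes the EAL's -c 0xF0 above: a hex core mask with bits 4-7 set, so the target's four reactors land on cores 4 through 7 (the "Reactor started on core" notices just below), while the reconnect initiator later runs with -c 0xF on cores 0-3, keeping the two processes off each other's CPUs. Decoding such a mask is plain shell arithmetic:

    mask=0xF0
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && echo "reactor on core $core"
    done
    # prints cores 4, 5, 6 and 7 -- matching the reactor notices below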
00:31:31.954 [2024-10-07 09:52:31.455242] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:31.954 [2024-10-07 09:52:31.455249] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:31.954 [2024-10-07 09:52:31.455256] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:31.954 [2024-10-07 09:52:31.457360] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:31:31.954 [2024-10-07 09:52:31.457898] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:31:31.954 [2024-10-07 09:52:31.458068] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:31:31.954 [2024-10-07 09:52:31.458088] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:31:32.525 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:31:32.525 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@867 -- # return 0 00:31:32.525 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:32.525 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@733 -- # xtrace_disable 00:31:32.525 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.525 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:32.525 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:32.525 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:32.525 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.525 Malloc0 00:31:32.525 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:31:32.525 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:32.525 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:32.525 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.525 [2024-10-07 09:52:32.175237] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:32.525 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:31:32.525 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:32.525 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:32.525 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.786 09:52:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:31:32.786 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:32.786 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:32.786 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.786 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:31:32.786 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:32.786 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:32.786 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.786 [2024-10-07 09:52:32.215716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.786 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:31:32.786 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:32.786 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:32.786 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.786 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:31:32.786 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3552939 00:31:32.786 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:31:32.786 09:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:34.706 09:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3552745 00:31:34.706 09:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:31:34.706 Read completed with error (sct=0, sc=8) 00:31:34.706 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error 
(sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 [2024-10-07 09:52:34.254762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 
00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Read completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 Write completed with error (sct=0, sc=8) 00:31:34.707 starting I/O failed 00:31:34.707 [2024-10-07 09:52:34.255171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:34.707 [2024-10-07 09:52:34.255566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.707 [2024-10-07 09:52:34.255592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.707 qpair failed and we were unable to recover it. 00:31:34.707 [2024-10-07 09:52:34.256092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.707 [2024-10-07 09:52:34.256153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.707 qpair failed and we were unable to recover it. 00:31:34.707 [2024-10-07 09:52:34.256524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.707 [2024-10-07 09:52:34.256540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.707 qpair failed and we were unable to recover it. 00:31:34.707 [2024-10-07 09:52:34.257091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.707 [2024-10-07 09:52:34.257152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.707 qpair failed and we were unable to recover it. 00:31:34.707 [2024-10-07 09:52:34.257512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.707 [2024-10-07 09:52:34.257527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.707 qpair failed and we were unable to recover it. 
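That burst of failed completions is the point of tc2: host/target_disconnect.sh@45 SIGKILLs the target (pid 3552745) while the reconnect example is driving 32-deep random I/O against it, every in-flight command completes with an error, and the initiator is left re-dialling a listener that no longer exists, which is the errno-111 loop that follows. The choreography, condensed from the trace (arguments and pid exactly as logged):

    # target is already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
        -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 3552745     # hard-kill the target mid-I/O; no graceful shutdown
    sleep 2             # in-flight I/O fails, then reconnects die with ECONNREFUSED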
[the connect() failed (errno = 111) / sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence repeats for every subsequent reconnect attempt, 2024-10-07 09:52:34.256092 through 09:52:34.322196]
00:31:34.712 [2024-10-07 09:52:34.322586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.712 [2024-10-07 09:52:34.322615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.712 qpair failed and we were unable to recover it. 00:31:34.712 [2024-10-07 09:52:34.322973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.712 [2024-10-07 09:52:34.323003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.712 qpair failed and we were unable to recover it. 00:31:34.712 [2024-10-07 09:52:34.323300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.712 [2024-10-07 09:52:34.323329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.712 qpair failed and we were unable to recover it. 00:31:34.712 [2024-10-07 09:52:34.323720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.712 [2024-10-07 09:52:34.323750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.712 qpair failed and we were unable to recover it. 00:31:34.712 [2024-10-07 09:52:34.324115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.712 [2024-10-07 09:52:34.324143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.712 qpair failed and we were unable to recover it. 00:31:34.712 [2024-10-07 09:52:34.324495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.712 [2024-10-07 09:52:34.324525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.324763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.324793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.325181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.325211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.325576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.325604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.325987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.326017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 
00:31:34.713 [2024-10-07 09:52:34.326359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.326389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.326532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.326565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.326953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.326985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.327342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.327372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.327737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.327768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.328136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.328166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.328531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.328560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.328962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.328993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.329344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.329374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.329742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.329772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 
00:31:34.713 [2024-10-07 09:52:34.330146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.330175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.330552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.330582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.330948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.330978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.331324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.331353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.331726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.331758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.332129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.332164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.332516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.332546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.332924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.332955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.333259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.333288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.333674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.333728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 
00:31:34.713 [2024-10-07 09:52:34.334133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.334162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.334517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.334547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.334969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.335000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.335367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.335395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.335764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.335795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.336170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.336199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.336563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.336593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.336966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.336997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.337365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.337394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.337774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.337804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 
00:31:34.713 [2024-10-07 09:52:34.338161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.338191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.338553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.338583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.338845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.338879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.713 [2024-10-07 09:52:34.339247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.713 [2024-10-07 09:52:34.339277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.713 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.339639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.339670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.340063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.340093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.340450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.340479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.340843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.340873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.341247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.341276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.341661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.341693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 
00:31:34.714 [2024-10-07 09:52:34.342074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.342103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.342471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.342500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.342869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.342900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.343158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.343188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.343590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.343629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.344051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.344080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.344440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.344469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.344858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.344888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.345269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.345298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.345658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.345689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 
00:31:34.714 [2024-10-07 09:52:34.346061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.346090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.346440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.346470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.346836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.346867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.347249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.347277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.347640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.347671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.348045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.348082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.348478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.348508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.348879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.348910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.349274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.349304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.349673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.349705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 
00:31:34.714 [2024-10-07 09:52:34.350079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.350107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.350471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.350500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.350891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.350922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.351278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.351307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.351713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.351744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.352112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.352141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.352508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.352538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.352918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.352949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.353328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.353356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.353733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.353763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 
00:31:34.714 [2024-10-07 09:52:34.354124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.354153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.714 [2024-10-07 09:52:34.354534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.714 [2024-10-07 09:52:34.354563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.714 qpair failed and we were unable to recover it. 00:31:34.715 [2024-10-07 09:52:34.354906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.715 [2024-10-07 09:52:34.354938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.715 qpair failed and we were unable to recover it. 00:31:34.715 [2024-10-07 09:52:34.355244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.715 [2024-10-07 09:52:34.355273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.715 qpair failed and we were unable to recover it. 00:31:34.715 [2024-10-07 09:52:34.355654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.715 [2024-10-07 09:52:34.355686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.715 qpair failed and we were unable to recover it. 00:31:34.715 [2024-10-07 09:52:34.356076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.715 [2024-10-07 09:52:34.356106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.715 qpair failed and we were unable to recover it. 00:31:34.715 [2024-10-07 09:52:34.356499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.715 [2024-10-07 09:52:34.356528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.715 qpair failed and we were unable to recover it. 00:31:34.715 [2024-10-07 09:52:34.356868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.715 [2024-10-07 09:52:34.356900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.715 qpair failed and we were unable to recover it. 00:31:34.715 [2024-10-07 09:52:34.357262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.715 [2024-10-07 09:52:34.357292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.715 qpair failed and we were unable to recover it. 00:31:34.715 [2024-10-07 09:52:34.357665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.715 [2024-10-07 09:52:34.357695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.715 qpair failed and we were unable to recover it. 
00:31:34.715 [2024-10-07 09:52:34.358061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.715 [2024-10-07 09:52:34.358089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.715 qpair failed and we were unable to recover it. 00:31:34.715 [2024-10-07 09:52:34.358431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.715 [2024-10-07 09:52:34.358459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.715 qpair failed and we were unable to recover it. 00:31:34.715 [2024-10-07 09:52:34.358894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.715 [2024-10-07 09:52:34.358926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.715 qpair failed and we were unable to recover it. 00:31:34.715 [2024-10-07 09:52:34.359332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.715 [2024-10-07 09:52:34.359362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.715 qpair failed and we were unable to recover it. 00:31:34.715 [2024-10-07 09:52:34.359734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.715 [2024-10-07 09:52:34.359763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.715 qpair failed and we were unable to recover it. 00:31:34.715 [2024-10-07 09:52:34.360128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.715 [2024-10-07 09:52:34.360157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.715 qpair failed and we were unable to recover it. 00:31:34.715 [2024-10-07 09:52:34.360522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.715 [2024-10-07 09:52:34.360551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.715 qpair failed and we were unable to recover it. 00:31:34.715 [2024-10-07 09:52:34.360886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.715 [2024-10-07 09:52:34.360916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.715 qpair failed and we were unable to recover it. 00:31:34.715 [2024-10-07 09:52:34.361339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.715 [2024-10-07 09:52:34.361368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.715 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.361724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.361758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 
00:31:34.992 [2024-10-07 09:52:34.362134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.362162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.362410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.362439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.362779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.362812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.363181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.363209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.363635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.363666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.364035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.364071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.364325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.364353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.364689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.364720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.365051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.365081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.365452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.365482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 
00:31:34.992 [2024-10-07 09:52:34.365775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.365805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.366067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.366100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.366460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.366491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.366778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.366808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.367162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.367192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.367569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.367598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.368034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.368064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.368329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.368360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.368655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.368686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.369064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.369094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 
00:31:34.992 [2024-10-07 09:52:34.369468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.369496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.369858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.369888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.370251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.992 [2024-10-07 09:52:34.370280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.992 qpair failed and we were unable to recover it. 00:31:34.992 [2024-10-07 09:52:34.370666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.993 [2024-10-07 09:52:34.370696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.993 qpair failed and we were unable to recover it. 00:31:34.993 [2024-10-07 09:52:34.371081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.993 [2024-10-07 09:52:34.371111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.993 qpair failed and we were unable to recover it. 00:31:34.993 [2024-10-07 09:52:34.371469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.993 [2024-10-07 09:52:34.371498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.993 qpair failed and we were unable to recover it. 00:31:34.993 [2024-10-07 09:52:34.371900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.993 [2024-10-07 09:52:34.371931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.993 qpair failed and we were unable to recover it. 00:31:34.993 [2024-10-07 09:52:34.372312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.993 [2024-10-07 09:52:34.372341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.993 qpair failed and we were unable to recover it. 00:31:34.993 [2024-10-07 09:52:34.372767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.993 [2024-10-07 09:52:34.372798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.993 qpair failed and we were unable to recover it. 00:31:34.993 [2024-10-07 09:52:34.373152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.993 [2024-10-07 09:52:34.373182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.993 qpair failed and we were unable to recover it. 
00:31:34.993 [2024-10-07 09:52:34.373533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.993 [2024-10-07 09:52:34.373562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.993 qpair failed and we were unable to recover it.
[... the same three-line failure repeats, unchanged except for timestamps, from 09:52:34.373 through 09:52:34.459: every connect attempt against tqpair=0x7fdd70000b90 (addr=10.0.0.2, port=4420) fails with errno = 111 and the qpair cannot be recovered; duplicate triplets elided ...]
00:31:34.998 [2024-10-07 09:52:34.459776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.998 [2024-10-07 09:52:34.459808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.998 qpair failed and we were unable to recover it.
00:31:34.998 [2024-10-07 09:52:34.460171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.998 [2024-10-07 09:52:34.460207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.998 qpair failed and we were unable to recover it. 00:31:34.998 [2024-10-07 09:52:34.460635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.998 [2024-10-07 09:52:34.460666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.998 qpair failed and we were unable to recover it. 00:31:34.998 [2024-10-07 09:52:34.461014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.998 [2024-10-07 09:52:34.461045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.998 qpair failed and we were unable to recover it. 00:31:34.998 [2024-10-07 09:52:34.461427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.998 [2024-10-07 09:52:34.461456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.998 qpair failed and we were unable to recover it. 00:31:34.998 [2024-10-07 09:52:34.461704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.998 [2024-10-07 09:52:34.461733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.998 qpair failed and we were unable to recover it. 00:31:34.998 [2024-10-07 09:52:34.462107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.998 [2024-10-07 09:52:34.462138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.998 qpair failed and we were unable to recover it. 00:31:34.998 [2024-10-07 09:52:34.462493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.998 [2024-10-07 09:52:34.462524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.998 qpair failed and we were unable to recover it. 00:31:34.998 [2024-10-07 09:52:34.462918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.462949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.463311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.463342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.463703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.463735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 
00:31:34.999 [2024-10-07 09:52:34.464071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.464099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.464435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.464464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.464846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.464876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.465226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.465255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.465636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.465668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.466030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.466062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.466498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.466529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.466863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.466893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.467261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.467291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.467660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.467691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 
00:31:34.999 [2024-10-07 09:52:34.468077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.468107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.468476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.468506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.468882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.468913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.469281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.469311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.469523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.469556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.469947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.469979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.470327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.470357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.470695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.470726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.470978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.471007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.471281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.471311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 
00:31:34.999 [2024-10-07 09:52:34.471667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.471698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.471938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.471968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.472309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.472345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.472719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.472750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.473104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.473133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.473498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.473528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.473915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.473945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.474307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.474337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.474569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.474600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.474885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.474920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 
00:31:34.999 [2024-10-07 09:52:34.475300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.475331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.475702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.475734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.476123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.476152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.476514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.476544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.476900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.476931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:34.999 qpair failed and we were unable to recover it. 00:31:34.999 [2024-10-07 09:52:34.477200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.999 [2024-10-07 09:52:34.477230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.477588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.477630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.477983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.478013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.478386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.478414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.478770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.478801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 
00:31:35.000 [2024-10-07 09:52:34.479158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.479189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.479542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.479571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.479798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.479829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.480213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.480243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.480594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.480638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.480892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.480922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.481270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.481301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.481647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.481678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.482044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.482074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.482465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.482494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 
00:31:35.000 [2024-10-07 09:52:34.482845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.482876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.483242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.483271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.483647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.483677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.483921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.483954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.484246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.484275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.484636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.484668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.485066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.485095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.485453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.485482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.485750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.485783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.486160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.486190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 
00:31:35.000 [2024-10-07 09:52:34.486562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.486592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.486966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.486998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.487379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.487417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.487664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.487698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.488045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.488075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.488449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.488479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.488844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.488875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.489244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.489274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.489706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.489738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.490043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.490072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 
00:31:35.000 [2024-10-07 09:52:34.490321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.490353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.490637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.490669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.490922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.490952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.000 [2024-10-07 09:52:34.491322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.000 [2024-10-07 09:52:34.491352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.000 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.491695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.491727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.492084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.492117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.492487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.492517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.492876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.492909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.493282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.493312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.493681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.493712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 
00:31:35.001 [2024-10-07 09:52:34.493955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.493985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.494338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.494369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.494736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.494768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.495104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.495133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.495513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.495543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.495900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.495932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.496268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.496297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.496654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.496686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.497071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.497101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.497464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.497495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 
00:31:35.001 [2024-10-07 09:52:34.497868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.497899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.498295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.498325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.498685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.498714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.499106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.499136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.499435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.499466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.499829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.499861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.500227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.500258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.500637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.500668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.501030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.501060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.501434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.501463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 
00:31:35.001 [2024-10-07 09:52:34.501840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.501870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.502231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.502260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.502492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.502530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.502928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.502958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.503314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.503344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.503702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.503731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.504138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.504167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.001 qpair failed and we were unable to recover it. 00:31:35.001 [2024-10-07 09:52:34.504544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.001 [2024-10-07 09:52:34.504573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.002 qpair failed and we were unable to recover it. 00:31:35.002 [2024-10-07 09:52:34.504953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.002 [2024-10-07 09:52:34.504982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.002 qpair failed and we were unable to recover it. 00:31:35.002 [2024-10-07 09:52:34.505355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.002 [2024-10-07 09:52:34.505386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.002 qpair failed and we were unable to recover it. 
00:31:35.002 [2024-10-07 09:52:34.505772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.002 [2024-10-07 09:52:34.505803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.002 qpair failed and we were unable to recover it. 00:31:35.002 [2024-10-07 09:52:34.506176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.002 [2024-10-07 09:52:34.506207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.002 qpair failed and we were unable to recover it. 00:31:35.002 [2024-10-07 09:52:34.506579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.002 [2024-10-07 09:52:34.506609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.002 qpair failed and we were unable to recover it. 00:31:35.002 [2024-10-07 09:52:34.507052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.002 [2024-10-07 09:52:34.507084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.002 qpair failed and we were unable to recover it. 00:31:35.002 [2024-10-07 09:52:34.507496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.002 [2024-10-07 09:52:34.507525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.002 qpair failed and we were unable to recover it. 00:31:35.002 [2024-10-07 09:52:34.507962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.002 [2024-10-07 09:52:34.507994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.002 qpair failed and we were unable to recover it. 00:31:35.002 [2024-10-07 09:52:34.508346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.002 [2024-10-07 09:52:34.508384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.002 qpair failed and we were unable to recover it. 00:31:35.002 [2024-10-07 09:52:34.508761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.002 [2024-10-07 09:52:34.508792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.002 qpair failed and we were unable to recover it. 00:31:35.002 [2024-10-07 09:52:34.509068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.002 [2024-10-07 09:52:34.509097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.002 qpair failed and we were unable to recover it. 00:31:35.002 [2024-10-07 09:52:34.509443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.002 [2024-10-07 09:52:34.509472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.002 qpair failed and we were unable to recover it. 
00:31:35.002 [2024-10-07 09:52:34.509892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.002 [2024-10-07 09:52:34.509923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.002 qpair failed and we were unable to recover it.
00:31:35.002 [2024-10-07 09:52:34.510282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.002 [2024-10-07 09:52:34.510311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.002 qpair failed and we were unable to recover it.
[... same three-line error repeated for every further reconnect attempt of tqpair=0x7fdd70000b90 from 2024-10-07 09:52:34.510677 through 09:52:34.588463 (~200 attempts, each connect() failing with errno = 111 against addr=10.0.0.2, port=4420) ...]
00:31:35.007 [2024-10-07 09:52:34.588804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.007 [2024-10-07 09:52:34.588841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.007 qpair failed and we were unable to recover it.
00:31:35.007 [2024-10-07 09:52:34.589066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.007 [2024-10-07 09:52:34.589097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.007 qpair failed and we were unable to recover it. 00:31:35.007 [2024-10-07 09:52:34.589437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.007 [2024-10-07 09:52:34.589467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.007 qpair failed and we were unable to recover it. 00:31:35.007 [2024-10-07 09:52:34.589841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.007 [2024-10-07 09:52:34.589872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.590241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.590270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.590727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.590758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.591147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.591177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.591543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.591572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.591888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.591919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.592277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.592308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.592687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.592718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 
00:31:35.008 [2024-10-07 09:52:34.593018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.593055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.593465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.593494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.593851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.593882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.594251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.594280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.594640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.594672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.595055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.595084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.595453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.595482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.595833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.595862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.596117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.596146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.596478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.596506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 
00:31:35.008 [2024-10-07 09:52:34.596864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.596895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.597149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.597177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.597525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.597554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.597927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.597958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.598321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.598351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.598600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.598646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.599026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.599056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.599305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.599333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.599647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.599679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.600025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.600055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 
00:31:35.008 [2024-10-07 09:52:34.600294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.600322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.600685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.600714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.601084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.601113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.601483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.601512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.601879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.601909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.602272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.602301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.602665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.602696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.603078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.603107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.603469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.603498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.603852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.603888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 
00:31:35.008 [2024-10-07 09:52:34.604193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.008 [2024-10-07 09:52:34.604222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.008 qpair failed and we were unable to recover it. 00:31:35.008 [2024-10-07 09:52:34.604651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.604682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.604960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.604990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.605369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.605398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.605762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.605794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.606156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.606184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.606545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.606575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.606949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.606979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.607340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.607371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.607738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.607769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 
00:31:35.009 [2024-10-07 09:52:34.608149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.608179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.608554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.608582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.608994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.609025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.609266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.609295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.609544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.609572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.609948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.609979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.610340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.610370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.610748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.610777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.611138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.611167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.611538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.611568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 
00:31:35.009 [2024-10-07 09:52:34.611829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.611860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.612233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.612262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.612656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.612687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.613046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.613083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.613505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.613534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.613808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.613838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.614193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.614222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.614577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.614607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.614999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.615028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.615275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.615306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 
00:31:35.009 [2024-10-07 09:52:34.615675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.615705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.616111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.616140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.616552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.616581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.616998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.617030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.617375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.617405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.617791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.617822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.618195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.618224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.618468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.618497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.618742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.009 [2024-10-07 09:52:34.618776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.009 qpair failed and we were unable to recover it. 00:31:35.009 [2024-10-07 09:52:34.619147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.619190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 
00:31:35.010 [2024-10-07 09:52:34.619525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.619554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.619932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.619962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.620316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.620345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.620695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.620725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.621101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.621130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.621475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.621504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.621952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.621983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.622257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.622285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.622661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.622692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.623055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.623085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 
00:31:35.010 [2024-10-07 09:52:34.623439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.623469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.623844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.623875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.624236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.624266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.624647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.624678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.625038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.625068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.625319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.625351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.625701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.625733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.626081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.626111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.626391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.626419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.626786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.626816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 
00:31:35.010 [2024-10-07 09:52:34.627161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.627197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.627535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.627564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.627946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.627977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.628355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.628383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.628729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.628759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.629120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.629149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.629502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.629530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.629917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.629948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.630298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.630327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.630671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.630701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 
00:31:35.010 [2024-10-07 09:52:34.631074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.631103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.631456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.631486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.010 [2024-10-07 09:52:34.631741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.010 [2024-10-07 09:52:34.631770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.010 qpair failed and we were unable to recover it. 00:31:35.011 [2024-10-07 09:52:34.632014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.011 [2024-10-07 09:52:34.632044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.011 qpair failed and we were unable to recover it. 00:31:35.011 [2024-10-07 09:52:34.632279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.011 [2024-10-07 09:52:34.632309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.011 qpair failed and we were unable to recover it. 00:31:35.011 [2024-10-07 09:52:34.632672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.011 [2024-10-07 09:52:34.632703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.011 qpair failed and we were unable to recover it. 00:31:35.011 [2024-10-07 09:52:34.633078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.011 [2024-10-07 09:52:34.633107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.011 qpair failed and we were unable to recover it. 00:31:35.011 [2024-10-07 09:52:34.633486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.011 [2024-10-07 09:52:34.633515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.011 qpair failed and we were unable to recover it. 00:31:35.011 [2024-10-07 09:52:34.633875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.011 [2024-10-07 09:52:34.633908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.011 qpair failed and we were unable to recover it. 00:31:35.011 [2024-10-07 09:52:34.634278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.011 [2024-10-07 09:52:34.634312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.011 qpair failed and we were unable to recover it. 
00:31:35.011 [2024-10-07 09:52:34.634561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.011 [2024-10-07 09:52:34.634594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.011 qpair failed and we were unable to recover it. 00:31:35.011 [2024-10-07 09:52:34.634971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.011 [2024-10-07 09:52:34.635002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.011 qpair failed and we were unable to recover it. 00:31:35.011 [2024-10-07 09:52:34.635351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.011 [2024-10-07 09:52:34.635380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.011 qpair failed and we were unable to recover it. 00:31:35.011 [2024-10-07 09:52:34.635747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.011 [2024-10-07 09:52:34.635777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.011 qpair failed and we were unable to recover it. 00:31:35.011 [2024-10-07 09:52:34.636120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.011 [2024-10-07 09:52:34.636149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.011 qpair failed and we were unable to recover it. 00:31:35.011 [2024-10-07 09:52:34.636554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.011 [2024-10-07 09:52:34.636583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.011 qpair failed and we were unable to recover it. 00:31:35.011 [2024-10-07 09:52:34.636961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.011 [2024-10-07 09:52:34.636992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.011 qpair failed and we were unable to recover it. 00:31:35.011 [2024-10-07 09:52:34.637367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.011 [2024-10-07 09:52:34.637396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.011 qpair failed and we were unable to recover it. 00:31:35.011 [2024-10-07 09:52:34.637760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.011 [2024-10-07 09:52:34.637791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.011 qpair failed and we were unable to recover it. 00:31:35.011 [2024-10-07 09:52:34.638162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.011 [2024-10-07 09:52:34.638191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.011 qpair failed and we were unable to recover it. 
00:31:35.011 [2024-10-07 09:52:34.638559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.011 [2024-10-07 09:52:34.638589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.011 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for each further connection attempt in this window (console timestamps 00:31:35.011 through 00:31:35.295); only the per-attempt timestamps change ...]
00:31:35.295 [2024-10-07 09:52:34.718584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.295 [2024-10-07 09:52:34.718612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.295 qpair failed and we were unable to recover it.
00:31:35.295 [2024-10-07 09:52:34.719008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.295 [2024-10-07 09:52:34.719038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.295 qpair failed and we were unable to recover it. 00:31:35.295 [2024-10-07 09:52:34.719298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.295 [2024-10-07 09:52:34.719326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.295 qpair failed and we were unable to recover it. 00:31:35.295 [2024-10-07 09:52:34.719696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.295 [2024-10-07 09:52:34.719727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.295 qpair failed and we were unable to recover it. 00:31:35.295 [2024-10-07 09:52:34.719987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.295 [2024-10-07 09:52:34.720015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.295 qpair failed and we were unable to recover it. 00:31:35.295 [2024-10-07 09:52:34.720371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.295 [2024-10-07 09:52:34.720400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.295 qpair failed and we were unable to recover it. 00:31:35.295 [2024-10-07 09:52:34.720766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.295 [2024-10-07 09:52:34.720798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.295 qpair failed and we were unable to recover it. 00:31:35.295 [2024-10-07 09:52:34.721229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.295 [2024-10-07 09:52:34.721259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.295 qpair failed and we were unable to recover it. 00:31:35.295 [2024-10-07 09:52:34.721625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.295 [2024-10-07 09:52:34.721655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.295 qpair failed and we were unable to recover it. 00:31:35.295 [2024-10-07 09:52:34.722010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.722039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.722416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.722446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 
00:31:35.296 [2024-10-07 09:52:34.722754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.722784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.723159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.723188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.723550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.723580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.723847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.723878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.724269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.724299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.724683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.724714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.725076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.725105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.725460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.725489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.725843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.725874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.726241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.726277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 
00:31:35.296 [2024-10-07 09:52:34.726610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.726649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.726945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.726975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.727336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.727365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.727747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.727776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.728126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.728155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.728531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.728560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.728990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.729021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.729394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.729423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.729776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.729807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.730027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.730055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 
00:31:35.296 [2024-10-07 09:52:34.730373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.730403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.730784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.730815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.731190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.731218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.731475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.731504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.731783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.731813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.732194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.732223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.732588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.732641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.732995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.733025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.733392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.733421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.733774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.733805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 
00:31:35.296 [2024-10-07 09:52:34.734175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.734204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.734639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.734670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.735033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.735061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.735426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.735455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.735805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.735837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.736105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.296 [2024-10-07 09:52:34.736134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.296 qpair failed and we were unable to recover it. 00:31:35.296 [2024-10-07 09:52:34.736470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.736501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.736893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.736923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.737256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.737285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.737535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.737564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 
00:31:35.297 [2024-10-07 09:52:34.737996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.738027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.738446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.738475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.738816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.738847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.739241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.739270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.739638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.739668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.740024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.740053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.740429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.740458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.740822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.740853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.741213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.741243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.741636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.741672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 
00:31:35.297 [2024-10-07 09:52:34.742036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.742065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.742466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.742495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.742851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.742881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.743244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.743272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.743637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.743667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.744012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.744042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.744405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.744435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.744784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.744815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.745176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.745205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.745575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.745605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 
00:31:35.297 [2024-10-07 09:52:34.746011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.746040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.746300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.746328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.746712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.746742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.747089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.747120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.747478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.747507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.747900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.747930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.748290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.748318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.748696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.748726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.749095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.749124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.749515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.749544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 
00:31:35.297 [2024-10-07 09:52:34.749932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.297 [2024-10-07 09:52:34.749962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.297 qpair failed and we were unable to recover it. 00:31:35.297 [2024-10-07 09:52:34.750383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.750412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.750772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.750803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.751159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.751189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.751542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.751571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.751947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.751977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.752356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.752387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.752637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.752669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.753001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.753031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.753391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.753421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 
00:31:35.298 [2024-10-07 09:52:34.753789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.753819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.754199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.754227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.754472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.754505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.754902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.754934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.755290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.755319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.755592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.755631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.755972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.756001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.756352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.756381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.756740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.756770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.757149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.757185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 
00:31:35.298 [2024-10-07 09:52:34.757529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.757559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.757945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.757976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.758319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.758348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.758595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.758650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.759033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.759063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.298 [2024-10-07 09:52:34.759447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.298 [2024-10-07 09:52:34.759476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.298 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.759846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.759877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.760248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.760279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.760548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.760577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.760999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.761030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 
00:31:35.299 [2024-10-07 09:52:34.761326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.761354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.761821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.761852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.762257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.762286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.762672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.762703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.763070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.763099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.763466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.763497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.763858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.763889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.764243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.764274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.764522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.764553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.764950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.764981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 
00:31:35.299 [2024-10-07 09:52:34.765342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.765372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.765625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.765657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.765918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.765947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.766312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.766342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.766731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.766761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.767115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.767147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.767507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.767538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.767916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.767946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.299 qpair failed and we were unable to recover it. 00:31:35.299 [2024-10-07 09:52:34.768193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.299 [2024-10-07 09:52:34.768221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.768600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.768640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 
00:31:35.300 [2024-10-07 09:52:34.768979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.769010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.769356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.769385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.769737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.769767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.770160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.770189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.770449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.770481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.770745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.770776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.771016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.771044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.771440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.771469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.771818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.771850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.772240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.772275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 
00:31:35.300 [2024-10-07 09:52:34.772629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.772666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.773079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.773109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.773453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.773482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.773802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.773833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.774195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.774225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.774487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.774516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.774880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.774912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.775293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.775322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.775586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.775614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.775792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.775823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 
00:31:35.300 [2024-10-07 09:52:34.776209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.300 [2024-10-07 09:52:34.776238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.300 qpair failed and we were unable to recover it. 00:31:35.300 [2024-10-07 09:52:34.776606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.776647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.777002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.777031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.777420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.777450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.777804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.777836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.778224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.778253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.778486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.778514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.778951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.778982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.779348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.779377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.779770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.779800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 
00:31:35.301 [2024-10-07 09:52:34.780251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.780280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.780533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.780562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.780936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.780966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.781327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.781357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.781638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.781669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.782049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.782078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.782418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.782453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.782827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.782859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.783117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.783150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.783511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.783540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 
00:31:35.301 [2024-10-07 09:52:34.783917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.783948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.784340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.784368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.784761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.784791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.785153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.785182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.785563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.785592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.301 [2024-10-07 09:52:34.785963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.301 [2024-10-07 09:52:34.785993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.301 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.786331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.786361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.786737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.786768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.787222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.787252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.787648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.787679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 
00:31:35.302 [2024-10-07 09:52:34.787986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.788014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.788405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.788434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.788796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.788827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.789206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.789235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.789602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.789642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.789997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.790027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.790399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.790429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.790777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.790809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.791161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.791190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.791559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.791589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 
00:31:35.302 [2024-10-07 09:52:34.791830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.791860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.792262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.792290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.792654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.792685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.792939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.792969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.793353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.793382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.793812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.793843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.794201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.794230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.794611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.794661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.794952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.794980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.795223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.795254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 
00:31:35.302 [2024-10-07 09:52:34.795630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.795660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.796022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.302 [2024-10-07 09:52:34.796051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.302 qpair failed and we were unable to recover it. 00:31:35.302 [2024-10-07 09:52:34.796282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.796311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.796698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.796728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.797083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.797112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.797545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.797575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.797859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.797897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.798292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.798322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.798547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.798581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.799011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.799043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 
00:31:35.303 [2024-10-07 09:52:34.799384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.799414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.799527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.799556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.799819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.799850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.800202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.800232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.800596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.800640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.800908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.800937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.801234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.801263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.801705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.801736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.802108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.802137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.802479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.802508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 
00:31:35.303 [2024-10-07 09:52:34.802756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.802787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.803042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.803071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.803451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.803479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.803839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.803870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.804120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.804148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.804519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.804548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.804916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.804946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.303 qpair failed and we were unable to recover it. 00:31:35.303 [2024-10-07 09:52:34.805323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.303 [2024-10-07 09:52:34.805351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.805741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.805772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.806038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.806067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 
00:31:35.304 [2024-10-07 09:52:34.806414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.806443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.806739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.806770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.807140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.807179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.807582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.807612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.807852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.807881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.808264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.808293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.808558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.808589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.808997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.809027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.809378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.809408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.809647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.809678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 
00:31:35.304 [2024-10-07 09:52:34.810036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.810065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.810416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.810445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.810709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.810739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.810988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.811020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.811353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.811382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.811735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.811766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.812006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.812042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.812336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.812364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.812602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.812642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.812851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.812880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 
00:31:35.304 [2024-10-07 09:52:34.813247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.813276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.813649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.813680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.814047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.814076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.814294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.814322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.814706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.814736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.815104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.815133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.815385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.815415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.815780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.815810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.816173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.816202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.816565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.816593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 
00:31:35.304 [2024-10-07 09:52:34.816968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.816998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.817358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.817387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.817764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.304 [2024-10-07 09:52:34.817795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.304 qpair failed and we were unable to recover it. 00:31:35.304 [2024-10-07 09:52:34.818167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.818196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.818568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.818597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.818995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.819025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.819392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.819421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.819692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.819722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.820125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.820154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.820586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.820615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 
00:31:35.305 [2024-10-07 09:52:34.820986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.821015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.821175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.821204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.821560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.821589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.822027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.822057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.822413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.822443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.822825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.822856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.823218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.823248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.823636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.823666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.824057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.824086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.824437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.824467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 
00:31:35.305 [2024-10-07 09:52:34.824842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.824872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.825238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.825268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.825635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.825666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.825935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.825964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.826388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.826417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.826760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.826791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.827028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.827071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.827422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.827452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.827841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.827871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.828223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.828252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 
00:31:35.305 [2024-10-07 09:52:34.828625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.828655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.829026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.829055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.829396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.829426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.829769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.829799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.830067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.830095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.830442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.830472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.830839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.305 [2024-10-07 09:52:34.830872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.305 qpair failed and we were unable to recover it. 00:31:35.305 [2024-10-07 09:52:34.831247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.831276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.831627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.831659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.832031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.832060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 
00:31:35.306 [2024-10-07 09:52:34.832431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.832461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.832831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.832861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.833225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.833254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.833636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.833666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.834034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.834063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.834435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.834464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.834862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.834894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.835268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.835296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.835662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.835691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.836089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.836118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 
00:31:35.306 [2024-10-07 09:52:34.836478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.836507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.836805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.836835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.837213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.837242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.837627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.837658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.837998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.838026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.838388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.838416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.838791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.838823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.839201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.839229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.839593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.839632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 00:31:35.306 [2024-10-07 09:52:34.839984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.306 [2024-10-07 09:52:34.840014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.306 qpair failed and we were unable to recover it. 
00:31:35.306 [2024-10-07 09:52:34.840378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.306 [2024-10-07 09:52:34.840406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.306 qpair failed and we were unable to recover it.
00:31:35.306 [2024-10-07 09:52:34.840769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.306 [2024-10-07 09:52:34.840799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.306 qpair failed and we were unable to recover it.
00:31:35.306 [2024-10-07 09:52:34.841241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.306 [2024-10-07 09:52:34.841270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.306 qpair failed and we were unable to recover it.
00:31:35.306 [2024-10-07 09:52:34.841629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.306 [2024-10-07 09:52:34.841660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.306 qpair failed and we were unable to recover it.
00:31:35.306 [2024-10-07 09:52:34.842006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.306 [2024-10-07 09:52:34.842034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.306 qpair failed and we were unable to recover it.
00:31:35.306 [2024-10-07 09:52:34.842404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.306 [2024-10-07 09:52:34.842434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.306 qpair failed and we were unable to recover it.
00:31:35.306 [2024-10-07 09:52:34.842839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.306 [2024-10-07 09:52:34.842877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.306 qpair failed and we were unable to recover it.
00:31:35.306 [2024-10-07 09:52:34.843250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.306 [2024-10-07 09:52:34.843279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.306 qpair failed and we were unable to recover it.
00:31:35.306 [2024-10-07 09:52:34.843534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.306 [2024-10-07 09:52:34.843562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.306 qpair failed and we were unable to recover it.
00:31:35.306 [2024-10-07 09:52:34.843938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.843969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.844332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.844361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.844709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.844739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.845106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.845135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.845514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.845542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.845889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.845919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.846275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.846305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.846675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.846706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.847087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.847116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.847477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.847506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.847866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.847896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.848259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.848289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.848657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.848688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.849055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.849083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.849441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.849471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.849854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.849884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.850253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.850283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.850715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.850746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.851100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.851130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.851460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.851488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.851882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.851913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.852269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.852300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.852543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.852572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.852940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.852970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.853351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.853379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.853743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.853773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.854155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.854184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.854613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.854663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.854998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.855027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.855392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.855421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.855785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.855817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.856106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.856135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.856539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.856567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.856951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.856981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.857345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.857374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.857744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.857775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.858170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.858200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.858557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.307 [2024-10-07 09:52:34.858592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.307 qpair failed and we were unable to recover it.
00:31:35.307 [2024-10-07 09:52:34.858964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.858993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.859405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.859436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.859820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.859851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.860124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.860152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.860519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.860548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.860906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.860937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.861305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.861333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.861681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.861711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.862185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.862219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.862563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.862592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.862987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.863018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.863428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.863457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.863883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.863915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.864286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.864315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.864730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.864760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.865109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.865138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.865510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.865539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.865913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.865943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.866307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.866335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.866718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.866748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.867129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.867157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.867526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.867555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.867914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.867945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.868330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.868359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.868728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.868758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.869092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.869123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.869501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.869530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.869868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.869897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.870265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.870294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.870666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.870697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.871060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.871089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.871461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.871490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.871845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.871874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.872230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.872259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.872604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.872643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.873013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.873042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.873290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.873322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.873587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.873625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.308 qpair failed and we were unable to recover it.
00:31:35.308 [2024-10-07 09:52:34.873999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.308 [2024-10-07 09:52:34.874028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.874392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.874428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.874796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.874826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.875187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.875216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.875574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.875604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.875995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.876024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.876390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.876419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.876916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.876947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.877350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.877380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.877685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.877715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.878123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.878154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.878406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.878435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.878799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.878829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.879190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.879219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.879601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.879638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.879981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.880011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.880372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.880402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.880753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.880783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.881047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.881076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.881435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.881466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.881698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.881730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.882122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.882152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.882517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.882545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.882938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.882968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.883326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.883356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.883711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.883741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.884000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.884029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.884290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.884319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.884614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.884657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.885029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.885058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.885407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.885437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.885800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.885830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.886215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.886244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.886487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.886519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.886773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.886807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.887077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.887107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.887517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.887546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.887913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.887945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.309 qpair failed and we were unable to recover it.
00:31:35.309 [2024-10-07 09:52:34.888310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.309 [2024-10-07 09:52:34.888339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.888708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.888737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.889109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.889138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.889505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.889540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.889880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.889911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.890259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.890288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.890647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.890679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.891036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.891065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.891425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.891454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.891799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.891829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.892204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.892234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.892598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.892640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.892963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.892992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.893364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.893394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.893742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.893773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.894119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.894147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.894516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.894544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.894806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.894836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.895201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.895230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.895470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.895502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.895765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.895796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.896166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.896194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.896565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.896594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.896897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.896927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.897313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.897342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.897704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.897735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.898109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.898137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.898499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.898529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.898871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.898901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.899241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.899270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.899642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.899674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.900063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.900092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.900465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.900494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.900894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.310 [2024-10-07 09:52:34.900925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.310 qpair failed and we were unable to recover it.
00:31:35.310 [2024-10-07 09:52:34.901302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.311 [2024-10-07 09:52:34.901330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.311 qpair failed and we were unable to recover it.
00:31:35.311 [2024-10-07 09:52:34.901579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.311 [2024-10-07 09:52:34.901609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.311 qpair failed and we were unable to recover it.
00:31:35.311 [2024-10-07 09:52:34.901860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.311 [2024-10-07 09:52:34.901891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.311 qpair failed and we were unable to recover it.
00:31:35.311 [2024-10-07 09:52:34.902250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.311 [2024-10-07 09:52:34.902279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.311 qpair failed and we were unable to recover it.
00:31:35.311 [2024-10-07 09:52:34.902654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.311 [2024-10-07 09:52:34.902685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.311 qpair failed and we were unable to recover it.
00:31:35.311 [2024-10-07 09:52:34.903074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.311 [2024-10-07 09:52:34.903103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.311 qpair failed and we were unable to recover it.
00:31:35.311 [2024-10-07 09:52:34.903475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.311 [2024-10-07 09:52:34.903504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.311 qpair failed and we were unable to recover it.
00:31:35.311 [2024-10-07 09:52:34.903873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.311 [2024-10-07 09:52:34.903904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.311 qpair failed and we were unable to recover it.
00:31:35.311 [2024-10-07 09:52:34.904256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.311 [2024-10-07 09:52:34.904285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.311 qpair failed and we were unable to recover it.
00:31:35.311 [2024-10-07 09:52:34.904653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.311 [2024-10-07 09:52:34.904694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.311 qpair failed and we were unable to recover it.
00:31:35.311 [2024-10-07 09:52:34.905054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.311 [2024-10-07 09:52:34.905082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.311 qpair failed and we were unable to recover it.
00:31:35.311 [2024-10-07 09:52:34.905449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.311 [2024-10-07 09:52:34.905477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.311 qpair failed and we were unable to recover it.
00:31:35.311 [2024-10-07 09:52:34.905887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.311 [2024-10-07 09:52:34.905917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.311 qpair failed and we were unable to recover it.
00:31:35.311 [2024-10-07 09:52:34.906296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.906325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.906672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.906702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.907062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.907091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.907459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.907490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.907869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.907899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.908248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.908276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.908653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.908683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.909105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.909134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.909496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.909524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.909917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.909947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 
00:31:35.311 [2024-10-07 09:52:34.910321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.910350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.910584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.910637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.911017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.911047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.911419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.911448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.911703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.911738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.912020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.912048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.912422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.912451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.912821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.912852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.913100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.913130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.913536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.913565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 
00:31:35.311 [2024-10-07 09:52:34.913951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.913981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.914357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.914386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.914798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.914828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.915079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.915108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.915458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.311 [2024-10-07 09:52:34.915486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.311 qpair failed and we were unable to recover it. 00:31:35.311 [2024-10-07 09:52:34.915886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.915917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.916274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.916302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.916704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.916735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.917089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.917117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.917491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.917520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 
00:31:35.312 [2024-10-07 09:52:34.917854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.917885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.918244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.918273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.918627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.918657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.918927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.918959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.919320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.919349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.919711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.919742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.919974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.920011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.920425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.920454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.920747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.920778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.921161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.921190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 
00:31:35.312 [2024-10-07 09:52:34.921493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.921522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.921859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.921889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.922258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.922287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.922652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.922683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.923039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.923068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.923453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.923482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.923856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.923887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.924242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.924272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.924653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.924684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.924846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.924878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 
00:31:35.312 [2024-10-07 09:52:34.925273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.925302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.925653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.925684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.926047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.926076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.926483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.926512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.926861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.926892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.927254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.927283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.927727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.927757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.928122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.928153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.928506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.928534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.928919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.928949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 
00:31:35.312 [2024-10-07 09:52:34.929306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.929335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.929711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.929741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.930036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.312 [2024-10-07 09:52:34.930065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.312 qpair failed and we were unable to recover it. 00:31:35.312 [2024-10-07 09:52:34.930452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.313 [2024-10-07 09:52:34.930483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.313 qpair failed and we were unable to recover it. 00:31:35.313 [2024-10-07 09:52:34.930853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.313 [2024-10-07 09:52:34.930883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.313 qpair failed and we were unable to recover it. 00:31:35.313 [2024-10-07 09:52:34.931262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.313 [2024-10-07 09:52:34.931291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.313 qpair failed and we were unable to recover it. 00:31:35.313 [2024-10-07 09:52:34.931639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.313 [2024-10-07 09:52:34.931670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.313 qpair failed and we were unable to recover it. 00:31:35.313 [2024-10-07 09:52:34.932001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.313 [2024-10-07 09:52:34.932030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.313 qpair failed and we were unable to recover it. 00:31:35.313 [2024-10-07 09:52:34.932404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.313 [2024-10-07 09:52:34.932433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.313 qpair failed and we were unable to recover it. 00:31:35.313 [2024-10-07 09:52:34.932814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.313 [2024-10-07 09:52:34.932845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.313 qpair failed and we were unable to recover it. 
00:31:35.313 [2024-10-07 09:52:34.933213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.313 [2024-10-07 09:52:34.933241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.313 qpair failed and we were unable to recover it. 00:31:35.313 [2024-10-07 09:52:34.933455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.313 [2024-10-07 09:52:34.933485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.313 qpair failed and we were unable to recover it. 00:31:35.313 [2024-10-07 09:52:34.933870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.313 [2024-10-07 09:52:34.933902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.313 qpair failed and we were unable to recover it. 00:31:35.313 [2024-10-07 09:52:34.934262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.313 [2024-10-07 09:52:34.934290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.313 qpair failed and we were unable to recover it. 00:31:35.313 [2024-10-07 09:52:34.934645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.313 [2024-10-07 09:52:34.934676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.313 qpair failed and we were unable to recover it. 00:31:35.586 [2024-10-07 09:52:34.935022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.586 [2024-10-07 09:52:34.935057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.586 qpair failed and we were unable to recover it. 00:31:35.586 [2024-10-07 09:52:34.935390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.586 [2024-10-07 09:52:34.935425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.586 qpair failed and we were unable to recover it. 00:31:35.586 [2024-10-07 09:52:34.935808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.586 [2024-10-07 09:52:34.935839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.586 qpair failed and we were unable to recover it. 00:31:35.586 [2024-10-07 09:52:34.936199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.586 [2024-10-07 09:52:34.936228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.586 qpair failed and we were unable to recover it. 00:31:35.586 [2024-10-07 09:52:34.936590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.586 [2024-10-07 09:52:34.936646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.586 qpair failed and we were unable to recover it. 
00:31:35.586 [2024-10-07 09:52:34.937008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.586 [2024-10-07 09:52:34.937039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.586 qpair failed and we were unable to recover it. 00:31:35.586 [2024-10-07 09:52:34.937416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.586 [2024-10-07 09:52:34.937445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.586 qpair failed and we were unable to recover it. 00:31:35.586 [2024-10-07 09:52:34.937803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.586 [2024-10-07 09:52:34.937836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.586 qpair failed and we were unable to recover it. 00:31:35.586 [2024-10-07 09:52:34.938212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.586 [2024-10-07 09:52:34.938241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.586 qpair failed and we were unable to recover it. 00:31:35.586 [2024-10-07 09:52:34.938612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.586 [2024-10-07 09:52:34.938654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.586 qpair failed and we were unable to recover it. 00:31:35.586 [2024-10-07 09:52:34.939017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.586 [2024-10-07 09:52:34.939046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.586 qpair failed and we were unable to recover it. 00:31:35.586 [2024-10-07 09:52:34.939395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.586 [2024-10-07 09:52:34.939423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.586 qpair failed and we were unable to recover it. 00:31:35.586 [2024-10-07 09:52:34.939762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.586 [2024-10-07 09:52:34.939792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.586 qpair failed and we were unable to recover it. 00:31:35.586 [2024-10-07 09:52:34.940230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.586 [2024-10-07 09:52:34.940258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.586 qpair failed and we were unable to recover it. 00:31:35.586 [2024-10-07 09:52:34.940613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.940668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 
00:31:35.587 [2024-10-07 09:52:34.941076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.941105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.941467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.941495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.941726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.941759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.942134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.942163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.942520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.942558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.942841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.942871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.943235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.943265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.943565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.943593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.943982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.944012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.944382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.944411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 
00:31:35.587 [2024-10-07 09:52:34.944765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.944796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.945180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.945209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.945571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.945599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.946020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.946051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.946294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.946325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.946701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.946733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.947082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.947112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.947485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.947514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.947772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.947804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.948067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.948099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 
00:31:35.587 [2024-10-07 09:52:34.948476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.948505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.948851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.948881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.949236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.949265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.949642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.949673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.950020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.950049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.950296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.950326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.950694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.950732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.951099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.951129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.951372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.951401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.951760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.951791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 
00:31:35.587 [2024-10-07 09:52:34.952154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.952183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.952567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.952596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.952978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.953008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.953368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.953397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.953769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.953799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.954141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.954170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.954535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.954564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.954970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.955001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.955354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.955382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.955819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.955850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 
00:31:35.587 [2024-10-07 09:52:34.956243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.956273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.956649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.956680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.957069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.587 [2024-10-07 09:52:34.957098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.587 qpair failed and we were unable to recover it. 00:31:35.587 [2024-10-07 09:52:34.957507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.588 [2024-10-07 09:52:34.957535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.588 qpair failed and we were unable to recover it. 00:31:35.588 [2024-10-07 09:52:34.957927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.588 [2024-10-07 09:52:34.957958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.588 qpair failed and we were unable to recover it. 00:31:35.588 [2024-10-07 09:52:34.958322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.588 [2024-10-07 09:52:34.958351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.588 qpair failed and we were unable to recover it. 00:31:35.588 [2024-10-07 09:52:34.958711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.588 [2024-10-07 09:52:34.958741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.588 qpair failed and we were unable to recover it. 00:31:35.588 [2024-10-07 09:52:34.959095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.588 [2024-10-07 09:52:34.959125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.588 qpair failed and we were unable to recover it. 00:31:35.588 [2024-10-07 09:52:34.959481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.588 [2024-10-07 09:52:34.959510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.588 qpair failed and we were unable to recover it. 00:31:35.588 [2024-10-07 09:52:34.959743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.588 [2024-10-07 09:52:34.959775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.588 qpair failed and we were unable to recover it. 
00:31:35.588 [2024-10-07 09:52:34.960134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.588 [2024-10-07 09:52:34.960163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.588 qpair failed and we were unable to recover it. 00:31:35.588 [2024-10-07 09:52:34.960536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.588 [2024-10-07 09:52:34.960566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.588 qpair failed and we were unable to recover it. 00:31:35.588 [2024-10-07 09:52:34.960825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.588 [2024-10-07 09:52:34.960855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.588 qpair failed and we were unable to recover it. 00:31:35.588 [2024-10-07 09:52:34.961237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.588 [2024-10-07 09:52:34.961267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.588 qpair failed and we were unable to recover it. 00:31:35.588 [2024-10-07 09:52:34.961460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.588 [2024-10-07 09:52:34.961488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.588 qpair failed and we were unable to recover it. 00:31:35.588 [2024-10-07 09:52:34.961852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.588 [2024-10-07 09:52:34.961882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.588 qpair failed and we were unable to recover it. 00:31:35.588 [2024-10-07 09:52:34.962251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.588 [2024-10-07 09:52:34.962279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.588 qpair failed and we were unable to recover it. 00:31:35.588 [2024-10-07 09:52:34.962663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.588 [2024-10-07 09:52:34.962694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.588 qpair failed and we were unable to recover it. 00:31:35.588 [2024-10-07 09:52:34.963079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.588 [2024-10-07 09:52:34.963108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.588 qpair failed and we were unable to recover it. 00:31:35.588 [2024-10-07 09:52:34.963482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.588 [2024-10-07 09:52:34.963512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.588 qpair failed and we were unable to recover it. 
00:31:35.588 [2024-10-07 09:52:34.963725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.588 [2024-10-07 09:52:34.963756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.588 qpair failed and we were unable to recover it.
[... the same three-record cycle — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 09:52:34.964 through 09:52:35.042; only the timestamps differ ...]
00:31:35.594 [2024-10-07 09:52:35.042345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.594 [2024-10-07 09:52:35.042376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:35.594 qpair failed and we were unable to recover it.
00:31:35.594 [2024-10-07 09:52:35.042741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.042773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.043039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.043071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.043318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.043349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.043755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.043787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.044128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.044160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.044534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.044563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.044896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.044927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.045289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.045319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.045691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.045723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.046029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.046060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 
00:31:35.594 [2024-10-07 09:52:35.046440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.046470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.046867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.046900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.047260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.047289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.047548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.047576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.047967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.047998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.048360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.048392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.048664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.048700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.048944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.048976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.049324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.049356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.049613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.049653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 
00:31:35.594 [2024-10-07 09:52:35.049894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.049925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.050181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.050213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.050600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.050643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.051014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.051051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.051428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.051457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.051857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.051888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.052247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.052276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.052665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.052695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.053079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.053109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.053452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.053481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 
00:31:35.594 [2024-10-07 09:52:35.053862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.053892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.054260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.054290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.594 qpair failed and we were unable to recover it. 00:31:35.594 [2024-10-07 09:52:35.054737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.594 [2024-10-07 09:52:35.054769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.595 qpair failed and we were unable to recover it. 00:31:35.595 [2024-10-07 09:52:35.055013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.595 [2024-10-07 09:52:35.055042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.595 qpair failed and we were unable to recover it. 00:31:35.595 [2024-10-07 09:52:35.055370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.595 [2024-10-07 09:52:35.055398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.595 qpair failed and we were unable to recover it. 00:31:35.595 [2024-10-07 09:52:35.055668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.595 [2024-10-07 09:52:35.055699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.595 qpair failed and we were unable to recover it. 00:31:35.595 [2024-10-07 09:52:35.056080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.595 [2024-10-07 09:52:35.056109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.595 qpair failed and we were unable to recover it. 00:31:35.595 [2024-10-07 09:52:35.056481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.595 [2024-10-07 09:52:35.056511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.595 qpair failed and we were unable to recover it. 00:31:35.595 [2024-10-07 09:52:35.056851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.595 [2024-10-07 09:52:35.056883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.595 qpair failed and we were unable to recover it. 00:31:35.595 [2024-10-07 09:52:35.057244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.595 [2024-10-07 09:52:35.057273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.595 qpair failed and we were unable to recover it. 
00:31:35.595 [2024-10-07 09:52:35.057638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.595 [2024-10-07 09:52:35.057669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.595 qpair failed and we were unable to recover it. 00:31:35.595 [2024-10-07 09:52:35.058022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.595 [2024-10-07 09:52:35.058051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.595 qpair failed and we were unable to recover it. 00:31:35.595 [2024-10-07 09:52:35.058498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.595 [2024-10-07 09:52:35.058528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.595 qpair failed and we were unable to recover it. 00:31:35.595 [2024-10-07 09:52:35.058939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.595 [2024-10-07 09:52:35.058969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.595 qpair failed and we were unable to recover it. 00:31:35.595 [2024-10-07 09:52:35.059231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.595 [2024-10-07 09:52:35.059260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.595 qpair failed and we were unable to recover it. 00:31:35.595 [2024-10-07 09:52:35.059496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.595 [2024-10-07 09:52:35.059526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.595 qpair failed and we were unable to recover it. 00:31:35.595 [2024-10-07 09:52:35.059971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.595 [2024-10-07 09:52:35.060002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.595 qpair failed and we were unable to recover it. 00:31:35.595 [2024-10-07 09:52:35.060415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.595 [2024-10-07 09:52:35.060444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.595 qpair failed and we were unable to recover it. 00:31:35.595 [2024-10-07 09:52:35.060808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.595 [2024-10-07 09:52:35.060837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.595 qpair failed and we were unable to recover it. 00:31:35.595 [2024-10-07 09:52:35.061219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.595 [2024-10-07 09:52:35.061248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420 00:31:35.595 qpair failed and we were unable to recover it. 
00:31:35.595 [2024-10-07 09:52:35.061646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85a0f0 is same with the state(6) to be set
00:31:35.595 [2024-10-07 09:52:35.062333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.595 [2024-10-07 09:52:35.062461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:35.595 qpair failed and we were unable to recover it.
00:31:35.595 Read completed with error (sct=0, sc=8)
00:31:35.595 starting I/O failed
00:31:35.595 [the pair of lines above repeats for 32 outstanding I/Os in this span, 17 reads and 15 writes, all completing with error (sct=0, sc=8)]
00:31:35.595 [2024-10-07 09:52:35.062726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:35.595 [2024-10-07 09:52:35.063095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.595 [2024-10-07 09:52:35.063110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:35.595 qpair failed and we were unable to recover it.
00:31:35.598 [the three lines above repeat 109 times between 09:52:35.063095 and 09:52:35.097784; entries are identical apart from timestamps]
00:31:35.598 [2024-10-07 09:52:35.097983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.598 [2024-10-07 09:52:35.097990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.598 qpair failed and we were unable to recover it. 00:31:35.598 [2024-10-07 09:52:35.098360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.598 [2024-10-07 09:52:35.098368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.598 qpair failed and we were unable to recover it. 00:31:35.598 [2024-10-07 09:52:35.098590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.598 [2024-10-07 09:52:35.098598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.598 qpair failed and we were unable to recover it. 00:31:35.598 [2024-10-07 09:52:35.098956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.598 [2024-10-07 09:52:35.098964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.598 qpair failed and we were unable to recover it. 00:31:35.598 [2024-10-07 09:52:35.099294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.598 [2024-10-07 09:52:35.099302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.598 qpair failed and we were unable to recover it. 00:31:35.598 [2024-10-07 09:52:35.099627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.598 [2024-10-07 09:52:35.099634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.598 qpair failed and we were unable to recover it. 00:31:35.598 [2024-10-07 09:52:35.099952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.598 [2024-10-07 09:52:35.099960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.598 qpair failed and we were unable to recover it. 00:31:35.598 [2024-10-07 09:52:35.100171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.598 [2024-10-07 09:52:35.100179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.598 qpair failed and we were unable to recover it. 00:31:35.598 [2024-10-07 09:52:35.100522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.598 [2024-10-07 09:52:35.100530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.598 qpair failed and we were unable to recover it. 00:31:35.598 [2024-10-07 09:52:35.100859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.598 [2024-10-07 09:52:35.100869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.598 qpair failed and we were unable to recover it. 
00:31:35.598 [2024-10-07 09:52:35.101184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.598 [2024-10-07 09:52:35.101193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.101433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.101442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.101783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.101792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.102141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.102148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.102364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.102372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.102702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.102710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.103048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.103055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.103380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.103387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.103716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.103724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.104036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.104044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 
00:31:35.599 [2024-10-07 09:52:35.104355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.104363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.104687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.104695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.104975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.104982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.105325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.105333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.105652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.105661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.105969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.105978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.106294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.106303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.106476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.106485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.106884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.106892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.107101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.107109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 
00:31:35.599 [2024-10-07 09:52:35.107444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.107452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.107862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.107872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.108185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.108193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.108392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.108401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.108773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.108780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.109148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.109165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.109393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.109401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.109722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.109730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.110056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.110063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.110395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.110403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 
00:31:35.599 [2024-10-07 09:52:35.110729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.110737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.111062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.111070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.111388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.111395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.111691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.111699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.112035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.112044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.112380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.112389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.112709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.112717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.113010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.113018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.113346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.113353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.599 qpair failed and we were unable to recover it. 00:31:35.599 [2024-10-07 09:52:35.113650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.599 [2024-10-07 09:52:35.113659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 
00:31:35.600 [2024-10-07 09:52:35.113979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.113986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.114303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.114310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.114635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.114644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.114852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.114860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.115059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.115065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.115448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.115455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.115750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.115758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.116089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.116096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.116440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.116450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.116771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.116779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 
00:31:35.600 [2024-10-07 09:52:35.117091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.117098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.117427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.117434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.117754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.117762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.118094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.118101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.118445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.118454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.118784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.118792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.119111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.119119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.119353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.119361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.119685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.119693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.120013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.120021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 
00:31:35.600 [2024-10-07 09:52:35.120328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.120337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.120657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.120666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.120995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.121002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.121324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.121332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.121658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.121666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.122075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.122083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.122445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.122452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.122786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.122794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.123140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.123148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.123351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.123359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 
00:31:35.600 [2024-10-07 09:52:35.123598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.123606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.123962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.123970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.600 qpair failed and we were unable to recover it. 00:31:35.600 [2024-10-07 09:52:35.124272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.600 [2024-10-07 09:52:35.124280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.124606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.124613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.124963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.124972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.125343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.125351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.125534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.125542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.125870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.125878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.126195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.126203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.126524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.126532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 
00:31:35.601 [2024-10-07 09:52:35.126750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.126759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.127170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.127178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.127482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.127491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.127794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.127802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.128113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.128121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.128437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.128445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.128732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.128740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.129048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.129056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.129283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.129291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.129649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.129661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 
00:31:35.601 [2024-10-07 09:52:35.130000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.130007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.130313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.130321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.130629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.130636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.130940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.130948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.131274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.131282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.131565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.131574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.131907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.131915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.132237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.132245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.132568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.132576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.132883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.132891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 
00:31:35.601 [2024-10-07 09:52:35.133216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.133224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.133547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.133555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.133880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.133888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.134209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.134216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.134582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.134589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.134897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.134905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.135216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.135224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.135545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.135554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.135906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.135915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.136233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.136242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 
00:31:35.601 [2024-10-07 09:52:35.136558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.601 [2024-10-07 09:52:35.136565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.601 qpair failed and we were unable to recover it. 00:31:35.601 [2024-10-07 09:52:35.136874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.602 [2024-10-07 09:52:35.136882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.602 qpair failed and we were unable to recover it. 00:31:35.602 [2024-10-07 09:52:35.137203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.602 [2024-10-07 09:52:35.137210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.602 qpair failed and we were unable to recover it. 00:31:35.602 [2024-10-07 09:52:35.137534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.602 [2024-10-07 09:52:35.137542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.602 qpair failed and we were unable to recover it. 00:31:35.602 [2024-10-07 09:52:35.137751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.602 [2024-10-07 09:52:35.137759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.602 qpair failed and we were unable to recover it. 00:31:35.602 [2024-10-07 09:52:35.138100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.602 [2024-10-07 09:52:35.138107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.602 qpair failed and we were unable to recover it. 00:31:35.602 [2024-10-07 09:52:35.138424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.602 [2024-10-07 09:52:35.138435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.602 qpair failed and we were unable to recover it. 00:31:35.602 [2024-10-07 09:52:35.138748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.602 [2024-10-07 09:52:35.138756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.602 qpair failed and we were unable to recover it. 00:31:35.602 [2024-10-07 09:52:35.139075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.602 [2024-10-07 09:52:35.139082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.602 qpair failed and we were unable to recover it. 00:31:35.602 [2024-10-07 09:52:35.139405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.602 [2024-10-07 09:52:35.139412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.602 qpair failed and we were unable to recover it. 
00:31:35.602 [2024-10-07 09:52:35.139733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.602 [2024-10-07 09:52:35.139742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.602 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 09:52:35.140 through 09:52:35.206; only the timestamps differ ...]
00:31:35.607 [2024-10-07 09:52:35.206710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.607 [2024-10-07 09:52:35.206718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.607 qpair failed and we were unable to recover it.
00:31:35.607 [2024-10-07 09:52:35.207066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.607 [2024-10-07 09:52:35.207074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.607 qpair failed and we were unable to recover it. 00:31:35.607 [2024-10-07 09:52:35.207267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.607 [2024-10-07 09:52:35.207275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.607 qpair failed and we were unable to recover it. 00:31:35.607 [2024-10-07 09:52:35.207633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.607 [2024-10-07 09:52:35.207642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.607 qpair failed and we were unable to recover it. 00:31:35.607 [2024-10-07 09:52:35.207899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.607 [2024-10-07 09:52:35.207908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.607 qpair failed and we were unable to recover it. 00:31:35.607 [2024-10-07 09:52:35.208247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.607 [2024-10-07 09:52:35.208255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.607 qpair failed and we were unable to recover it. 00:31:35.607 [2024-10-07 09:52:35.208561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.607 [2024-10-07 09:52:35.208569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.607 qpair failed and we were unable to recover it. 00:31:35.607 [2024-10-07 09:52:35.208870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.607 [2024-10-07 09:52:35.208878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.607 qpair failed and we were unable to recover it. 00:31:35.607 [2024-10-07 09:52:35.209198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.607 [2024-10-07 09:52:35.209206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.607 qpair failed and we were unable to recover it. 00:31:35.607 [2024-10-07 09:52:35.209398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.607 [2024-10-07 09:52:35.209405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.607 qpair failed and we were unable to recover it. 00:31:35.607 [2024-10-07 09:52:35.209738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.607 [2024-10-07 09:52:35.209745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.607 qpair failed and we were unable to recover it. 
00:31:35.607 [2024-10-07 09:52:35.210061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.210069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.210388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.210395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.210713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.210721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.210927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.210936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.211257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.211264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.211568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.211576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.211898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.211905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.212313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.212322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.212624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.212632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.212965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.212973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 
00:31:35.608 [2024-10-07 09:52:35.213295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.213302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.213580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.213587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.213923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.213931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.214138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.214145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.214478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.214485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.214875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.214885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.215101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.215110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.215444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.215451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.215765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.215773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.216111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.216120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 
00:31:35.608 [2024-10-07 09:52:35.216446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.216456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.216773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.216786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.217097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.217105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.217438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.217446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.217763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.217771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.218091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.218098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.218408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.218415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.218735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.218743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.219066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.219074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.219394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.219401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 
00:31:35.608 [2024-10-07 09:52:35.219721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.219729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.220042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.220050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.220364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.220372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.220555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.220564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.220906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.220914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.221276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.221285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.221603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.221612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.221907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.221915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.222206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.608 [2024-10-07 09:52:35.222213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.608 qpair failed and we were unable to recover it. 00:31:35.608 [2024-10-07 09:52:35.222529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.222537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 
00:31:35.609 [2024-10-07 09:52:35.222865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.222873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.223184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.223192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.223506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.223515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.223845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.223853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.224155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.224163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.224482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.224491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.224798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.224807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.225132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.225141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.225348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.225357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.225589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.225598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 
00:31:35.609 [2024-10-07 09:52:35.225890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.225900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.226218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.226227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.226548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.226557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.226885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.226894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.227201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.227211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.227532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.227541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.227763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.227772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.228099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.228108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.228430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.228439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.228753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.228761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 
00:31:35.609 [2024-10-07 09:52:35.229098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.229107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.229432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.229441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.229759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.229769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.229970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.229978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.230257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.230264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.230585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.230593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.230921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.230930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.231256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.231264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.231582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.231590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.231875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.231883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 
00:31:35.609 [2024-10-07 09:52:35.232203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.232210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.232536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.232543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.232856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.609 [2024-10-07 09:52:35.232864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.609 qpair failed and we were unable to recover it. 00:31:35.609 [2024-10-07 09:52:35.233187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.610 [2024-10-07 09:52:35.233195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.610 qpair failed and we were unable to recover it. 00:31:35.610 [2024-10-07 09:52:35.233493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.610 [2024-10-07 09:52:35.233501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.610 qpair failed and we were unable to recover it. 00:31:35.610 [2024-10-07 09:52:35.233804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.610 [2024-10-07 09:52:35.233812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.610 qpair failed and we were unable to recover it. 00:31:35.610 [2024-10-07 09:52:35.234163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.610 [2024-10-07 09:52:35.234171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.610 qpair failed and we were unable to recover it. 00:31:35.610 [2024-10-07 09:52:35.234486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.610 [2024-10-07 09:52:35.234493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.610 qpair failed and we were unable to recover it. 00:31:35.610 [2024-10-07 09:52:35.234801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.610 [2024-10-07 09:52:35.234809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.610 qpair failed and we were unable to recover it. 00:31:35.610 [2024-10-07 09:52:35.235127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.610 [2024-10-07 09:52:35.235136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.610 qpair failed and we were unable to recover it. 
00:31:35.610 [2024-10-07 09:52:35.235448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.610 [2024-10-07 09:52:35.235458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.610 qpair failed and we were unable to recover it. 00:31:35.610 [2024-10-07 09:52:35.235775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.610 [2024-10-07 09:52:35.235782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.610 qpair failed and we were unable to recover it. 00:31:35.892 [2024-10-07 09:52:35.236106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.892 [2024-10-07 09:52:35.236116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.892 qpair failed and we were unable to recover it. 00:31:35.892 [2024-10-07 09:52:35.236437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.892 [2024-10-07 09:52:35.236446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.892 qpair failed and we were unable to recover it. 00:31:35.892 [2024-10-07 09:52:35.236717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.892 [2024-10-07 09:52:35.236725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.892 qpair failed and we were unable to recover it. 00:31:35.892 [2024-10-07 09:52:35.237030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.892 [2024-10-07 09:52:35.237040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.892 qpair failed and we were unable to recover it. 00:31:35.892 [2024-10-07 09:52:35.237366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.892 [2024-10-07 09:52:35.237374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.892 qpair failed and we were unable to recover it. 00:31:35.892 [2024-10-07 09:52:35.237688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.892 [2024-10-07 09:52:35.237696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.892 qpair failed and we were unable to recover it. 00:31:35.892 [2024-10-07 09:52:35.238029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.892 [2024-10-07 09:52:35.238036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.892 qpair failed and we were unable to recover it. 00:31:35.892 [2024-10-07 09:52:35.238340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.892 [2024-10-07 09:52:35.238352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.892 qpair failed and we were unable to recover it. 
00:31:35.892 [2024-10-07 09:52:35.238674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.892 [2024-10-07 09:52:35.238682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.892 qpair failed and we were unable to recover it. 00:31:35.892 [2024-10-07 09:52:35.239011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.239020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.239213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.239222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.239547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.239554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.239864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.239872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.240209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.240216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.240430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.240438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.240766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.240774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.241090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.241097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.241503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.241512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 
00:31:35.893 [2024-10-07 09:52:35.241842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.241849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.242181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.242189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.242500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.242509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.242829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.242838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.243171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.243180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.243507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.243516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.243833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.243841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.244160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.244168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.244487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.244494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.244799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.244807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 
00:31:35.893 [2024-10-07 09:52:35.245133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.245140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.245353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.245361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.245677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.245685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.246009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.246018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.246395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.246403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.246614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.246629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.246994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.247001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.247195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.247203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.247588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.247595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 00:31:35.893 [2024-10-07 09:52:35.247923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.893 [2024-10-07 09:52:35.247931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.893 qpair failed and we were unable to recover it. 
00:31:35.893 [2024-10-07 09:52:35.248263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.893 [2024-10-07 09:52:35.248270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:35.893 qpair failed and we were unable to recover it.
[... the same three-line sequence -- connect() failed (errno = 111), sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." -- repeats continuously from 2024-10-07 09:52:35.248581 through 09:52:35.277746; repetitions elided ...]
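Note: errno 111 on Linux is ECONNREFUSED -- the TCP connection to 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) is actively refused because nothing is accepting on that port while the initiator keeps retrying. A minimal standalone C sketch of the same failure path (not SPDK code; the address and port are taken from the log above):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Target taken from the log: no NVMe-oF TCP listener is up there. */
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no listener on the port, Linux reports errno 111
             * (ECONNREFUSED), which is what posix_sock_create logs above. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }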
00:31:35.896 Read completed with error (sct=0, sc=8)
00:31:35.896 starting I/O failed
[... 32 queued I/O completions failed in total on this qpair (20 reads, 12 writes), each reported as "completed with error (sct=0, sc=8)" followed by "starting I/O failed" ...]
00:31:35.896 [2024-10-07 09:52:35.278490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:35.896 [2024-10-07 09:52:35.279040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.896 [2024-10-07 09:52:35.279160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd68000b90 with addr=10.0.0.2, port=4420
00:31:35.896 qpair failed and we were unable to recover it.
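Note: two distinct failures surface here. The per-I/O status (sct=0, sc=8) decodes, per the NVMe base specification, as Status Code Type 0 (Generic Command Status), Status Code 08h "Command Aborted due to SQ Deletion": the 32 queued reads and writes are aborted because the failing qpair's submission queue is deleted during teardown. The follow-on CQ transport error -6 is the negated Linux errno ENXIO, matching the "No such device or address" text in the log. A small illustrative decoder for the generic status codes involved -- a sketch of the spec table, not SPDK's own status-string helper:

    #include <stdio.h>

    /* Generic Command Status (SCT 0) values from the NVMe base spec;
     * only a few entries are shown, enough to decode this log. */
    static const char *generic_sc_str(unsigned sc)
    {
        switch (sc) {
        case 0x00: return "Successful Completion";
        case 0x04: return "Data Transfer Error";
        case 0x06: return "Internal Error";
        case 0x07: return "Command Abort Requested";
        case 0x08: return "Command Aborted due to SQ Deletion";
        default:   return "Other/Unknown";
        }
    }

    int main(void)
    {
        /* The failed I/Os above all carry sct=0, sc=8. */
        printf("sct=0, sc=8 -> %s\n", generic_sc_str(0x8));
        return 0;
    }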
00:31:35.896 [2024-10-07 09:52:35.279635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.896 [2024-10-07 09:52:35.279674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd68000b90 with addr=10.0.0.2, port=4420
00:31:35.896 qpair failed and we were unable to recover it.
[... connect() retries then resume against tqpair=0x85c550 starting at 2024-10-07 09:52:35.279962 and keep failing with errno = 111 and "qpair failed and we were unable to recover it." through 09:52:35.308468; repetitions elided ...]
00:31:35.899 [2024-10-07 09:52:35.308790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.308798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.309146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.309153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.309460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.309468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.309692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.309699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.310047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.310054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.310256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.310263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.310580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.310589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.310917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.310924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.311132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.311142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.311490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.311497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 
00:31:35.899 [2024-10-07 09:52:35.311835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.311843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.312203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.312210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.312536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.312543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.312759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.312767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.313111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.313119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.313324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.313333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.313673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.313681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.313979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.313987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.314188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.314196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.314545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.314552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 
00:31:35.899 [2024-10-07 09:52:35.314766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.314775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.315133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.315142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.315481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.315490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.315790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.315798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.316119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.316128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.316326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.899 [2024-10-07 09:52:35.316334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.899 qpair failed and we were unable to recover it. 00:31:35.899 [2024-10-07 09:52:35.316674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.316683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.316865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.316872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.317206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.317215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.317562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.317572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 
00:31:35.900 [2024-10-07 09:52:35.317921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.317930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.318178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.318186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.318528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.318535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.318934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.318941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.319000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.319008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.319315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.319323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.319649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.319658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.319956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.319965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.320294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.320303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.320623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.320631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 
00:31:35.900 [2024-10-07 09:52:35.320943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.320951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.321129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.321137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.321361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.321368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.321718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.321726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.322064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.322072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.322279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.322286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.322648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.322657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.323001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.323008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.323179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.323187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.323467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.323478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 
00:31:35.900 [2024-10-07 09:52:35.323813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.323821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.324139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.324147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.324556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.324563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.324847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.324856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.325181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.325189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.325559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.325566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.325930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.325939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.326261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.326268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.326584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.326592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.326917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.326925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 
00:31:35.900 [2024-10-07 09:52:35.327251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.327260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.327578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.327586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.327919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.327928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.328248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.328255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.328575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.900 [2024-10-07 09:52:35.328582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.900 qpair failed and we were unable to recover it. 00:31:35.900 [2024-10-07 09:52:35.328920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.328928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.329105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.329113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.329472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.329479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.329674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.329682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.330027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.330036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 
00:31:35.901 [2024-10-07 09:52:35.330229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.330238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.330518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.330526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.330855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.330862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.331185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.331193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.331511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.331520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.331841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.331850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.332177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.332188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.332528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.332537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.332792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.332801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.333142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.333151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 
00:31:35.901 [2024-10-07 09:52:35.333435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.333444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.333663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.333671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.334004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.334011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.334324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.334332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.334653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.334661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.334891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.334899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.335237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.335244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.335560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.335568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.335893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.335902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.336203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.336210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 
00:31:35.901 [2024-10-07 09:52:35.336514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.336522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.336842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.336850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.337169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.337177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.337496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.337504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.337797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.337805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.338035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.338042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.338385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.338392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.338691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.338699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.338996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.901 [2024-10-07 09:52:35.339003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.901 qpair failed and we were unable to recover it. 00:31:35.901 [2024-10-07 09:52:35.339237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.339245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 
00:31:35.902 [2024-10-07 09:52:35.339581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.339588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.339908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.339916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.340237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.340245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.340553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.340561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.340903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.340910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.341124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.341132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.341318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.341327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.341657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.341665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.342028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.342035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.342342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.342350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 
00:31:35.902 [2024-10-07 09:52:35.342672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.342680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.343007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.343014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.343331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.343338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.343641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.343649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.343984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.343991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.344316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.344323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.344650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.344658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.344856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.344866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.345204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.345211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.345425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.345432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 
00:31:35.902 [2024-10-07 09:52:35.345804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.345812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.346148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.346155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.346474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.346481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.346793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.346801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.347124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.347133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.347454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.347463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.347783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.347792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.348108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.348115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.348425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.348432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 00:31:35.902 [2024-10-07 09:52:35.348754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.348762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it. 
00:31:35.902 [2024-10-07 09:52:35.349079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.902 [2024-10-07 09:52:35.349087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.902 qpair failed and we were unable to recover it.
00:31:35.908 [the three messages above repeat for every reconnect attempt from 2024-10-07 09:52:35.349079 through 09:52:35.416046, differing only in timestamps; each attempt against tqpair=0x85c550 at 10.0.0.2:4420 fails with errno = 111 and the qpair is never recovered]
00:31:35.908 [2024-10-07 09:52:35.416359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.416369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.416689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.416696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.417002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.417010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.417330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.417339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.417631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.417639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.417954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.417961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.418265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.418274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.418597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.418604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.418928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.418937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.419134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.419143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 
00:31:35.908 [2024-10-07 09:52:35.419475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.419485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.419728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.419739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.420061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.420070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.420470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.420478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.420828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.420836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.421163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.421171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.421500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.421509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.421840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.421849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.422171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.422179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.422502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.422511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 
00:31:35.908 [2024-10-07 09:52:35.422804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.422813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.423030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.423037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.423384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.423396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.423722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.423731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.423969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.423977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.424270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.908 [2024-10-07 09:52:35.424280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.908 qpair failed and we were unable to recover it. 00:31:35.908 [2024-10-07 09:52:35.424597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.424605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.424822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.424832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.425183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.425193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.425513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.425522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 
00:31:35.909 [2024-10-07 09:52:35.425695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.425706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.426049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.426057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.426382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.426390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.426711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.426720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.426935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.426945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.427282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.427290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.427615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.427629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.427953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.427961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.428282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.428290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.428587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.428595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 
00:31:35.909 [2024-10-07 09:52:35.428930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.428939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.429115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.429125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.429458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.429466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.429672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.429682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.430026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.430034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.430416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.430425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.430738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.430747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.431081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.431089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.431410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.431418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.431651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.431663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 
00:31:35.909 [2024-10-07 09:52:35.432063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.432070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.432369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.432378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.432740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.432748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.433076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.433093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.433432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.433440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.433659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.433667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.434000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.434009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.434337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.434344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.434745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.434755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.435042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.435050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 
00:31:35.909 [2024-10-07 09:52:35.435371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.435379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.435714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.435722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.436000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.436010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.436219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.436228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.909 [2024-10-07 09:52:35.436546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.909 [2024-10-07 09:52:35.436555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.909 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.436779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.436788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.437172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.437180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.437502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.437512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.437844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.437853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.438170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.438178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 
00:31:35.910 [2024-10-07 09:52:35.438505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.438513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.438843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.438851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.439180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.439188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.439404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.439413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.439750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.439759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.440008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.440016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.440340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.440349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.440676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.440688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.441027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.441036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.441358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.441367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 
00:31:35.910 [2024-10-07 09:52:35.441625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.441633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.441984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.441993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.442422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.442430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.442763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.442772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.443097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.443105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.443410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.443418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.443742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.443751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.443951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.443959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.444157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.444166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.444496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.444503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 
00:31:35.910 [2024-10-07 09:52:35.444810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.444826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.445058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.445066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.445395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.445404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.445600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.445609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.445945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.445954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.446167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.446176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.446495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.446503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.446840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.446849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.447051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.910 [2024-10-07 09:52:35.447059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.910 qpair failed and we were unable to recover it. 00:31:35.910 [2024-10-07 09:52:35.447406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.447414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 
00:31:35.911 [2024-10-07 09:52:35.447740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.447749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.448057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.448067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.448392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.448401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.448737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.448745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.449083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.449091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.449438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.449445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.449667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.449674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.449974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.449985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.450310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.450317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.450686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.450695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 
00:31:35.911 [2024-10-07 09:52:35.451040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.451047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.451373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.451381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.451699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.451708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.451943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.451950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.452283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.452290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.452614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.452627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.452936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.452944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.453272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.453283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.453612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.453631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.453956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.453965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 
00:31:35.911 [2024-10-07 09:52:35.454262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.454270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.454624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.454633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.454981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.454990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.455312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.455322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.455645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.455655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.456071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.456081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.456397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.456407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.456608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.456623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.456877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.456887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 00:31:35.911 [2024-10-07 09:52:35.457111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.457120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it. 
00:31:35.911 [2024-10-07 09:52:35.457445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.911 [2024-10-07 09:52:35.457454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.911 qpair failed and we were unable to recover it.
00:31:35.917 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence repeats continuously for tqpair=0x85c550 with addr=10.0.0.2, port=4420, through 2024-10-07 09:52:35.519 ...]
00:31:35.917 [2024-10-07 09:52:35.519888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.519896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.520087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.520095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.520352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.520359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.520700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.520708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.521108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.521116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.521466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.521474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.521805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.521813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.522159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.522167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.522384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.522393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.522605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.522613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 
00:31:35.917 [2024-10-07 09:52:35.522965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.522974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.523305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.523314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.523496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.523505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.523864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.523873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.524087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.524097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.524432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.524441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.524733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.524742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.525114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.525123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.525455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.525464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.525686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.525693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 
00:31:35.917 [2024-10-07 09:52:35.526022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.526030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.526338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.526346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.526679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.526690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.526989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.526997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.527289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.527297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.527479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.527488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.527805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.527814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.917 [2024-10-07 09:52:35.528147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.917 [2024-10-07 09:52:35.528154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.917 qpair failed and we were unable to recover it. 00:31:35.918 [2024-10-07 09:52:35.528489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.528497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 00:31:35.918 [2024-10-07 09:52:35.528802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.528810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 
00:31:35.918 [2024-10-07 09:52:35.529141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.529149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 00:31:35.918 [2024-10-07 09:52:35.529444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.529452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 00:31:35.918 [2024-10-07 09:52:35.529786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.529794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 00:31:35.918 [2024-10-07 09:52:35.530103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.530117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 00:31:35.918 [2024-10-07 09:52:35.530339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.530347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 00:31:35.918 [2024-10-07 09:52:35.530598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.530606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 00:31:35.918 [2024-10-07 09:52:35.530944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.530952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 00:31:35.918 [2024-10-07 09:52:35.531259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.531267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 00:31:35.918 [2024-10-07 09:52:35.531573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.531582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 00:31:35.918 [2024-10-07 09:52:35.531816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.531824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 
00:31:35.918 [2024-10-07 09:52:35.532147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.532156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 00:31:35.918 [2024-10-07 09:52:35.532489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.532499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 00:31:35.918 [2024-10-07 09:52:35.532797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.532807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 00:31:35.918 [2024-10-07 09:52:35.533013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.533022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 00:31:35.918 [2024-10-07 09:52:35.533339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.533348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 00:31:35.918 [2024-10-07 09:52:35.533669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.533678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 00:31:35.918 [2024-10-07 09:52:35.534024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.918 [2024-10-07 09:52:35.534032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:35.918 qpair failed and we were unable to recover it. 00:31:36.192 [2024-10-07 09:52:35.534365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-10-07 09:52:35.534377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-10-07 09:52:35.534697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-10-07 09:52:35.534708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-10-07 09:52:35.535117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-10-07 09:52:35.535126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 
00:31:36.192 [2024-10-07 09:52:35.535342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-10-07 09:52:35.535352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-10-07 09:52:35.535655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-10-07 09:52:35.535664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-10-07 09:52:35.535998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-10-07 09:52:35.536006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-10-07 09:52:35.536344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-10-07 09:52:35.536360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-10-07 09:52:35.536678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-10-07 09:52:35.536687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-10-07 09:52:35.537096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-10-07 09:52:35.537105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-10-07 09:52:35.537413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-10-07 09:52:35.537423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-10-07 09:52:35.537748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-10-07 09:52:35.537758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-10-07 09:52:35.538122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-10-07 09:52:35.538130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-10-07 09:52:35.538419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-10-07 09:52:35.538430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 
00:31:36.192 [2024-10-07 09:52:35.538759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-10-07 09:52:35.538768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-10-07 09:52:35.538972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-10-07 09:52:35.538981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-10-07 09:52:35.539310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-10-07 09:52:35.539319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-10-07 09:52:35.539532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-10-07 09:52:35.539544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.539783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.539792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.540129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.540139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.540490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.540500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.540815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.540824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.541157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.541166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.541484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.541493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 
00:31:36.193 [2024-10-07 09:52:35.541799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.541809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.542134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.542142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.542541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.542549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.542763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.542771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.543108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.543115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.543290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.543298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.543481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.543490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.543793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.543800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.544119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.544127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.544444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.544452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 
00:31:36.193 [2024-10-07 09:52:35.544754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.544763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.545091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.545099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.545408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.545416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.545750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.545758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.546087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.546095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.546412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.546419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.546736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.546747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.547073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.547081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.547401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.547409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.547738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.547745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 
00:31:36.193 [2024-10-07 09:52:35.548069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.548080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.548396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.548405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.548735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.548746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.549124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.549133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.549457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.549466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.549767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.549774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.549968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.549976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.550334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.550341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.550666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.550676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.550982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.550989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 
00:31:36.193 [2024-10-07 09:52:35.551246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.551254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.551598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.551605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-10-07 09:52:35.551796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-10-07 09:52:35.551805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.552171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.552178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.552575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.552582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.552926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.552934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.553246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.553253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.553608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.553623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.553937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.553946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.554262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.554271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 
00:31:36.194 [2024-10-07 09:52:35.554629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.554637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.555024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.555034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.555396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.555405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.555698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.555707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.556042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.556049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.556452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.556460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.556781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.556789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.557105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.557113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.557431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.557440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.557692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.557700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 
00:31:36.194 [2024-10-07 09:52:35.558027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.558035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.558377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.558385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.558712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.558719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.559025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.559033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.559361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.559370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.559695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.559704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.559970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.559977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.560300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.560308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.560628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.560636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.561037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.561045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 
00:31:36.194 [2024-10-07 09:52:35.561363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.561371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.561691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.561701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.562028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.562037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.562240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.562247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.562568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.562577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.562940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.562948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.563242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.563250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.563554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.563562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.563915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.563923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-10-07 09:52:35.564257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-10-07 09:52:35.564267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 
00:31:36.200 [2024-10-07 09:52:35.628654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.628663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.629014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.629023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.629218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.629228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.629552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.629559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.629869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.629877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.630201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.630209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.630417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.630425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.630766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.630776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.630988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.630996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.631332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.631340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 
00:31:36.200 [2024-10-07 09:52:35.631658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.631665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.631998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.632005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.632174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.632182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.632656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.632759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd68000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.633193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.633232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd68000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.633586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.633629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd68000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.634047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.634076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd68000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.634453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.634482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd68000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.634884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.634989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd68000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.635350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.635364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 
00:31:36.200 [2024-10-07 09:52:35.635651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.635659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.636011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.636019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.636352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.636364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.636698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.636706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.636942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.636950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.637281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.637288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.637626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-10-07 09:52:35.637636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-10-07 09:52:35.637816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.637825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.638110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.638117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.638452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.638459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 
00:31:36.201 [2024-10-07 09:52:35.638679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.638688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.639047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.639055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.639266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.639273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.639563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.639571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.639871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.639879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.640213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.640221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.640437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.640445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.640755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.640763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.641027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.641035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.641355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.641362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 
00:31:36.201 [2024-10-07 09:52:35.641682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.641691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.642023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.642031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.642237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.642245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.642597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.642604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.642941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.642949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.643267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.643275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.643628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.643636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.643958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.643966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.644290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.644299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.644511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.644518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 
00:31:36.201 [2024-10-07 09:52:35.644845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.644854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.645064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.645072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.645266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.645274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.645490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.645497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.645904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.645912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.646244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.646252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.646576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.646585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.646802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.646810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.647174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.647182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.647468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.647475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 
00:31:36.201 [2024-10-07 09:52:35.647706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-10-07 09:52:35.647714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-10-07 09:52:35.648055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.648062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.648395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.648404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.648759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.648770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.649075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.649083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.649412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.649421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.649744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.649751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.650083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.650091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.650334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.650341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.650565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.650574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 
00:31:36.202 [2024-10-07 09:52:35.650960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.650968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.651316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.651324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.651659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.651667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.651992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.652000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.652335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.652342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.652671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.652679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.653004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.653012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.653341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.653352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.653522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.653532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.653754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.653761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 
00:31:36.202 [2024-10-07 09:52:35.654101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.654109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.654427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.654434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.654742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.654750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.655063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.655071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.655398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.655408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.655745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.655753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.656074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.656082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.656398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.656406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.656731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.656739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.657067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.657074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 
00:31:36.202 [2024-10-07 09:52:35.657399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.657411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.657739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.657748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.658089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.658096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.658412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.658420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.658718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.658726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.659101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.659109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.659436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.659447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.659766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.659774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.660094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.660102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-10-07 09:52:35.660421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-10-07 09:52:35.660428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 
00:31:36.202 [2024-10-07 09:52:35.660744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.660752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.661083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.661090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.661317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.661325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.661513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.661520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.661807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.661815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.662139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.662149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.662473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.662481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.662801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.662809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.663132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.663140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.663459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.663467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 
00:31:36.203 [2024-10-07 09:52:35.663796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.663805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.664125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.664133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.664451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.664462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.664800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.664808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.665126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.665134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.665450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.665458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.665767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.665776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.666106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.666113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.666434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.666444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.666766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.666775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 
00:31:36.203 [2024-10-07 09:52:35.666992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.667000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.667320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.667327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.667533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.667541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.667905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.667912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.668242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.668249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.668578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.668586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.668921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.668930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.669252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.669261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.669503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.669512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.669725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.669734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 
00:31:36.203 [2024-10-07 09:52:35.669945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.669952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.670267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.670277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.670599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.670607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.670969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.670980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.671378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.671386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.671571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.671581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.671859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.671869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.672203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.672210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.672520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-10-07 09:52:35.672528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-10-07 09:52:35.672849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-10-07 09:52:35.672857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 
00:31:36.209 [2024-10-07 09:52:35.737622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.737630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.737835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.737844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.738160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.738168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.738487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.738496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.738851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.738859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.739164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.739172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.739506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.739513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.739891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.739901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.740230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.740237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.740563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.740573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 
00:31:36.209 [2024-10-07 09:52:35.740768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.740777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.741103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.741111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.741414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.741422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.741737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.741745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.742040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.742048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.742364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.742371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.742685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.742693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.743023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.743032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.743353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.743361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.743689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.743697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 
00:31:36.209 [2024-10-07 09:52:35.744023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.744031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.744338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.744345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.744673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.744681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.744997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.745004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.745321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.745330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.745656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.745664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.745993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.746001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.746170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-10-07 09:52:35.746179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-10-07 09:52:35.746524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.746531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.746853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.746861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 
00:31:36.210 [2024-10-07 09:52:35.747151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.747159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.747483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.747492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.747798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.747807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.748132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.748139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.748348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.748356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.748708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.748717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.749042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.749052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.749400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.749410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.749760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.749769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.750091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.750099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 
00:31:36.210 [2024-10-07 09:52:35.750289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.750298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.750512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.750521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.750809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.750818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.751136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.751143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.751466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.751474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.751835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.751843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.752173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.752182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.752410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.752418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.752750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.752759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.753075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.753082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 
00:31:36.210 [2024-10-07 09:52:35.753403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.753411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.753736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.753744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.754053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.754061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.754383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.754392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.754709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.754717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.755043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.755050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.755367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.755375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.755697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.755705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.756013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.756021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.756392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.756403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 
00:31:36.210 [2024-10-07 09:52:35.756712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.756721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.756940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-10-07 09:52:35.756948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-10-07 09:52:35.757155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.757163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.757353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.757364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.757708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.757716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.758036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.758044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.758268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.758277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.758498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.758506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.758665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.758674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.758935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.758944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 
00:31:36.211 [2024-10-07 09:52:35.759311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.759319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.759548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.759555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.759836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.759845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.760026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.760033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.760364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.760372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.760580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.760588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.760909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.760918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.761250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.761258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.761590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.761598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.761915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.761923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 
00:31:36.211 [2024-10-07 09:52:35.762162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.762169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.762542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.762550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.762873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.762881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.763193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.763203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.763458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.763466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.763790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.763798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.764172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.764180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.764479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.764487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.764721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.764730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.764958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.764967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 
00:31:36.211 [2024-10-07 09:52:35.765283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.765293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.765655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.765665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.766023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.766030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.766339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.766347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.766447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.766455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.766735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.766743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.767078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.767086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.767410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.767419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.767746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.767755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-10-07 09:52:35.768091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.768100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 
00:31:36.211 [2024-10-07 09:52:35.768294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-10-07 09:52:35.768302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.768509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.768518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.768893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.768900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.769214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.769222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.769547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.769557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.769851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.769859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.770178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.770188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.770516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.770524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.770849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.770858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.771179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.771187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 
00:31:36.212 [2024-10-07 09:52:35.771419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.771427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.771656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.771664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.771867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.771875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.772218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.772228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.772568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.772576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.772687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.772695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.773048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.773057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.773272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.773281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.773610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.773624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.773938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.773946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 
00:31:36.212 [2024-10-07 09:52:35.774280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.774290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.774495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.774504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.774708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.774716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.775033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.775040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.775371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.775380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.775707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.775715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.775947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.775955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.776260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.776268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.776596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.776604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-10-07 09:52:35.776937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.776945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 
00:31:36.212 [2024-10-07 09:52:35.777147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-10-07 09:52:35.777155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it.
00:31:36.212-00:31:36.218 [duplicate entries condensed: the same three-line error — posix_sock_create connect() failure (errno = 111), nvme_tcp_qpair_connect_sock sock connection error (tqpair=0x85c550, addr=10.0.0.2, port=4420), "qpair failed and we were unable to recover it." — repeats continuously from 09:52:35.777 through 09:52:35.841]
00:31:36.218 [2024-10-07 09:52:35.841491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.218 [2024-10-07 09:52:35.841498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.218 qpair failed and we were unable to recover it.
00:31:36.218 [2024-10-07 09:52:35.841795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.218 [2024-10-07 09:52:35.841803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.218 qpair failed and we were unable to recover it. 00:31:36.218 [2024-10-07 09:52:35.842129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.218 [2024-10-07 09:52:35.842136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.218 qpair failed and we were unable to recover it. 00:31:36.493 [2024-10-07 09:52:35.842470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.493 [2024-10-07 09:52:35.842482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.493 qpair failed and we were unable to recover it. 00:31:36.493 [2024-10-07 09:52:35.842797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.493 [2024-10-07 09:52:35.842805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.493 qpair failed and we were unable to recover it. 00:31:36.493 [2024-10-07 09:52:35.843119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.493 [2024-10-07 09:52:35.843127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.493 qpair failed and we were unable to recover it. 00:31:36.493 [2024-10-07 09:52:35.843448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.493 [2024-10-07 09:52:35.843456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.493 qpair failed and we were unable to recover it. 00:31:36.493 [2024-10-07 09:52:35.843778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.493 [2024-10-07 09:52:35.843786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.493 qpair failed and we were unable to recover it. 00:31:36.493 [2024-10-07 09:52:35.844096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.493 [2024-10-07 09:52:35.844106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.493 qpair failed and we were unable to recover it. 00:31:36.493 [2024-10-07 09:52:35.844435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.493 [2024-10-07 09:52:35.844445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.493 qpair failed and we were unable to recover it. 00:31:36.493 [2024-10-07 09:52:35.844769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.493 [2024-10-07 09:52:35.844782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.493 qpair failed and we were unable to recover it. 
00:31:36.494 [2024-10-07 09:52:35.845123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.845132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.845533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.845543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.845772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.845779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.846124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.846133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.846460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.846469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.846838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.846846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.847145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.847162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.847393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.847402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.847625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.847634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.847943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.847951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 
00:31:36.494 [2024-10-07 09:52:35.848270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.848279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.848600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.848609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.848821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.848829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.849147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.849161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.849484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.849493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.849687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.849696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.849994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.850002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.850333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.850341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.850718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.850726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.851048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.851056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 
00:31:36.494 [2024-10-07 09:52:35.851376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.851383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.851695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.851704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.852030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.852038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.852356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.852364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.852680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.852687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.852925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.852933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.853270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.853279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.853599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.853607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.853950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.853959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.854270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.854278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 
00:31:36.494 [2024-10-07 09:52:35.854489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.854497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.854779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.854788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.855105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.855114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.855436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.855445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.855763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.855772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.856097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.856105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.856345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.856352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.494 [2024-10-07 09:52:35.856603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.494 [2024-10-07 09:52:35.856612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.494 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.856831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.856839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.857129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.857136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 
00:31:36.495 [2024-10-07 09:52:35.857442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.857450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.857789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.857797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.858117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.858127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.858465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.858472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.858687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.858694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.858919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.858926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.859278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.859285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.859505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.859512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.859817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.859824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.860149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.860157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 
00:31:36.495 [2024-10-07 09:52:35.860484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.860494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.860783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.860792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.861121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.861131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.861502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.861511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.861852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.861862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.862204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.862211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.862535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.862542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.862874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.862883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.863192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.863200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.863558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.863566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 
00:31:36.495 [2024-10-07 09:52:35.863766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.863774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.864000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.864009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.864359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.864366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.864695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.864705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.865044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.865052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.865386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.865394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.865769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.865776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.866023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.866033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.866388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.866395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.866793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.866803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 
00:31:36.495 [2024-10-07 09:52:35.867131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.867139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.867456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.867465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.867708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.867717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.867827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.867835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.868170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.868177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.868522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.868531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.495 [2024-10-07 09:52:35.868852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.495 [2024-10-07 09:52:35.868860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.495 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.869177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.869185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.869509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.869518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.869841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.869849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 
00:31:36.496 [2024-10-07 09:52:35.870204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.870212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.870526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.870535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.870864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.870872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.871177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.871186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.871421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.871429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.871752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.871762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.872083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.872092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.872419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.872427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.872766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.872774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.873083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.873091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 
00:31:36.496 [2024-10-07 09:52:35.873295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.873303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.873629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.873637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.873930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.873938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.874263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.874272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.874595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.874606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.874977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.874986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.875288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.875296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.875661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.875669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.875993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.876001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.876325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.876333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 
00:31:36.496 [2024-10-07 09:52:35.876667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.876676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.876996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.877004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.877321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.877329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.877507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.877525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.877846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.877855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.878172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.878180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.878501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.878510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.878803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.878811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.879128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.879136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.879467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.879476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 
00:31:36.496 [2024-10-07 09:52:35.879793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.879802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.880131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.880140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.880466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.880474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.880698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.880707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.881037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.881046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.496 qpair failed and we were unable to recover it. 00:31:36.496 [2024-10-07 09:52:35.881376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.496 [2024-10-07 09:52:35.881383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.497 qpair failed and we were unable to recover it. 00:31:36.497 [2024-10-07 09:52:35.881714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.497 [2024-10-07 09:52:35.881722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.497 qpair failed and we were unable to recover it. 00:31:36.497 [2024-10-07 09:52:35.882044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.497 [2024-10-07 09:52:35.882052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.497 qpair failed and we were unable to recover it. 00:31:36.497 [2024-10-07 09:52:35.882362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.497 [2024-10-07 09:52:35.882370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.497 qpair failed and we were unable to recover it. 00:31:36.497 [2024-10-07 09:52:35.882674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.497 [2024-10-07 09:52:35.882683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.497 qpair failed and we were unable to recover it. 
00:31:36.497 [2024-10-07 09:52:35.882987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.497 [2024-10-07 09:52:35.882995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:36.497 qpair failed and we were unable to recover it.
[... the same three-line error sequence (connect() refused with errno = 111, then the nvme_tcp_qpair_connect_sock connection error for tqpair=0x85c550 against 10.0.0.2 port 4420, then the unrecoverable-qpair notice) repeats verbatim for every subsequent reconnection attempt from 09:52:35.883 through 09:52:35.950 ...]
00:31:36.502 [2024-10-07 09:52:35.951240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.502 [2024-10-07 09:52:35.951248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.502 qpair failed and we were unable to recover it. 00:31:36.502 [2024-10-07 09:52:35.951560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.502 [2024-10-07 09:52:35.951567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.502 qpair failed and we were unable to recover it. 00:31:36.502 [2024-10-07 09:52:35.951882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.502 [2024-10-07 09:52:35.951890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.502 qpair failed and we were unable to recover it. 00:31:36.502 [2024-10-07 09:52:35.952211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.502 [2024-10-07 09:52:35.952218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.502 qpair failed and we were unable to recover it. 00:31:36.502 [2024-10-07 09:52:35.952534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.502 [2024-10-07 09:52:35.952542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.502 qpair failed and we were unable to recover it. 00:31:36.502 [2024-10-07 09:52:35.952861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.502 [2024-10-07 09:52:35.952868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.502 qpair failed and we were unable to recover it. 00:31:36.502 [2024-10-07 09:52:35.953193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.953202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.953512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.953519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.953845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.953854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.954181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.954191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 
00:31:36.503 [2024-10-07 09:52:35.954504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.954512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.954843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.954851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.955169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.955176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.955494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.955504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.955797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.955806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.956134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.956142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.956459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.956467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.956783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.956792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.957111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.957120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.957443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.957454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 
00:31:36.503 [2024-10-07 09:52:35.957775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.957783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.958171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.958180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.958503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.958510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.958806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.958814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.959012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.959020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.959346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.959354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.959662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.959672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.959862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.959871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.960206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.960214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.960457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.960465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 
00:31:36.503 [2024-10-07 09:52:35.960751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.960759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.961172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.961180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.961504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.961511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.961710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.961719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.962094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.962104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.962432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.962440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.962761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.962769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.962941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.962950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.963171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.963179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.963407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.963415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 
00:31:36.503 [2024-10-07 09:52:35.963732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.963740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.964144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.964153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.964470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.964479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.964799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.964807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.965132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.503 [2024-10-07 09:52:35.965139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.503 qpair failed and we were unable to recover it. 00:31:36.503 [2024-10-07 09:52:35.965447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.965455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.965853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.965860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.966184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.966192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.966511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.966521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.966858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.966866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 
00:31:36.504 [2024-10-07 09:52:35.967183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.967193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.967510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.967519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.967843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.967853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.968212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.968220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.968539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.968550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.968890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.968900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.969231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.969240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.969556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.969565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.969880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.969889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.970197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.970206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 
00:31:36.504 [2024-10-07 09:52:35.970529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.970538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.970838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.970847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.971178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.971189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.971511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.971521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.971862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.971872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.972083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.972092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.972427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.972436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.972751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.972759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.973166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.973175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.973479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.973488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 
00:31:36.504 [2024-10-07 09:52:35.973800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.973808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.974133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.974141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.974463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.974471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.974693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.974702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.974884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.974892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.975275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.975284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.975598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.975607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.975927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.975938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.976248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.976256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.976575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.976582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 
00:31:36.504 [2024-10-07 09:52:35.976908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.976915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.977261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.977268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.977623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.977634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.977915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.504 [2024-10-07 09:52:35.977923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.504 qpair failed and we were unable to recover it. 00:31:36.504 [2024-10-07 09:52:35.978289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.978297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.978654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.978663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.978988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.978995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.979318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.979326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.979642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.979650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.979975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.979982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 
00:31:36.505 [2024-10-07 09:52:35.980343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.980352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.980676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.980685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.981015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.981022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.981327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.981335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.981667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.981675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.981980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.981987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.982308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.982317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.982635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.982643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.982811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.982819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.983139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.983147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 
00:31:36.505 [2024-10-07 09:52:35.983452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.983459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.983503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.983512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.983793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.983801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.984128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.984136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.984452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.984461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.984752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.984761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.985084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.985091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.985416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.985424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.985748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.985756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.986073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.986081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 
00:31:36.505 [2024-10-07 09:52:35.986405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.986415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.986646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.986656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.986954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.986962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.987281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.987289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.987606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.987613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.987939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.987946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.988154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.988162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.988479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.988487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.988839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.988849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.989173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.989182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 
00:31:36.505 [2024-10-07 09:52:35.989499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.989507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.989719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.989727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.990087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.505 [2024-10-07 09:52:35.990095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.505 qpair failed and we were unable to recover it. 00:31:36.505 [2024-10-07 09:52:35.990270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.506 [2024-10-07 09:52:35.990278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.506 qpair failed and we were unable to recover it. 00:31:36.506 [2024-10-07 09:52:35.990553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.506 [2024-10-07 09:52:35.990560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.506 qpair failed and we were unable to recover it. 00:31:36.506 [2024-10-07 09:52:35.990942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.506 [2024-10-07 09:52:35.990950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.506 qpair failed and we were unable to recover it. 00:31:36.506 [2024-10-07 09:52:35.991260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.506 [2024-10-07 09:52:35.991269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.506 qpair failed and we were unable to recover it. 00:31:36.506 [2024-10-07 09:52:35.991585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.506 [2024-10-07 09:52:35.991593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.506 qpair failed and we were unable to recover it. 00:31:36.506 [2024-10-07 09:52:35.992005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.506 [2024-10-07 09:52:35.992014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.506 qpair failed and we were unable to recover it. 00:31:36.506 [2024-10-07 09:52:35.992331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.506 [2024-10-07 09:52:35.992338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.506 qpair failed and we were unable to recover it. 
00:31:36.506 [2024-10-07 09:52:35.992658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.506 [2024-10-07 09:52:35.992666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:36.506 qpair failed and we were unable to recover it.
00:31:36.511 [... the same three-line error repeats back-to-back from 09:52:35.992658 through 09:52:36.056890: every connect() attempt to 10.0.0.2 port 4420 on tqpair=0x85c550 returns errno = 111, and each time the qpair fails without recovering ...]
00:31:36.511 [2024-10-07 09:52:36.057303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.511 [2024-10-07 09:52:36.057312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.511 qpair failed and we were unable to recover it. 00:31:36.511 [2024-10-07 09:52:36.057522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.511 [2024-10-07 09:52:36.057531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.511 qpair failed and we were unable to recover it. 00:31:36.511 [2024-10-07 09:52:36.057859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.511 [2024-10-07 09:52:36.057867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.511 qpair failed and we were unable to recover it. 00:31:36.511 [2024-10-07 09:52:36.058169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.511 [2024-10-07 09:52:36.058178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.511 qpair failed and we were unable to recover it. 00:31:36.511 [2024-10-07 09:52:36.058513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.511 [2024-10-07 09:52:36.058523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.511 qpair failed and we were unable to recover it. 00:31:36.511 [2024-10-07 09:52:36.058946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.511 [2024-10-07 09:52:36.058954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.511 qpair failed and we were unable to recover it. 00:31:36.511 [2024-10-07 09:52:36.059260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.511 [2024-10-07 09:52:36.059268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.511 qpair failed and we were unable to recover it. 00:31:36.511 [2024-10-07 09:52:36.059589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.511 [2024-10-07 09:52:36.059598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.511 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.059919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.059928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.060173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.060181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 
00:31:36.512 [2024-10-07 09:52:36.060522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.060531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.060848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.060858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.061194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.061204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.061520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.061530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.061840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.061850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.062073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.062083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.062255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.062263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.062611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.062626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.062820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.062828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.063067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.063074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 
00:31:36.512 [2024-10-07 09:52:36.063368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.063376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.063695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.063704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.064000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.064009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.064382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.064393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.064703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.064712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.065043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.065050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.065368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.065377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.065706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.065716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.065892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.065901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.066297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.066305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 
00:31:36.512 [2024-10-07 09:52:36.066487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.066495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.066854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.066863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.067187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.067194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.067510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.067518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.067726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.067736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.068070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.068079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.068436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.068445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.068752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.068760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.068958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.068966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.069324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.069332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 
00:31:36.512 [2024-10-07 09:52:36.069654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.069662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.069982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.069990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.070318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.070327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.070640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.070648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.070985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.070993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.512 [2024-10-07 09:52:36.071310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.512 [2024-10-07 09:52:36.071317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.512 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.071525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.071534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.071856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.071863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.072058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.072066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.072400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.072409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 
00:31:36.513 [2024-10-07 09:52:36.072729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.072741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.073055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.073065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.073381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.073390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.073721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.073729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.074065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.074072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.074270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.074277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.074474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.074482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.074831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.074840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.075190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.075199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.075439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.075447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 
00:31:36.513 [2024-10-07 09:52:36.075777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.075785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.076146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.076154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.076493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.076502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.076709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.076717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.077057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.077065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.077378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.077386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.077691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.077699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.077899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.077906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.078281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.078290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.078492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.078501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 
00:31:36.513 [2024-10-07 09:52:36.078715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.078724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.079056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.079065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.079394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.079404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.079594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.079602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.079937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.079946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.080272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.080280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.080491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.080499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.080815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.080822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.081127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.081135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.081356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.081365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 
00:31:36.513 [2024-10-07 09:52:36.081568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.081578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.081879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.081887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.082103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.082112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.082441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.513 [2024-10-07 09:52:36.082450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.513 qpair failed and we were unable to recover it. 00:31:36.513 [2024-10-07 09:52:36.082776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.082784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.083099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.083107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.083327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.083334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.083671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.083681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.083902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.083911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.084240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.084249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 
00:31:36.514 [2024-10-07 09:52:36.084561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.084568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.084894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.084906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.085237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.085245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.085580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.085589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.085910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.085919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.086255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.086263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.086563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.086571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.086896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.086904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.087225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.087233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.087555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.087563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 
00:31:36.514 [2024-10-07 09:52:36.087879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.087888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.088226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.088235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.088549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.088557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.088911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.088919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.089230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.089238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.089597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.089606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.089817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.089826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.090136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.090143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.090543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.090553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.090901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.090909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 
00:31:36.514 [2024-10-07 09:52:36.091233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.091241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.091536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.091545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.091897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.091905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.092209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.092217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.092535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.092545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.092774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.092784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.093116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.093125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.093444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.093453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.093738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.093746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.094049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.094058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 
00:31:36.514 [2024-10-07 09:52:36.094405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.094412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.094711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.094720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.094930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.514 [2024-10-07 09:52:36.094937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.514 qpair failed and we were unable to recover it. 00:31:36.514 [2024-10-07 09:52:36.095278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-10-07 09:52:36.095286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-10-07 09:52:36.095622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-10-07 09:52:36.095630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-10-07 09:52:36.095954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-10-07 09:52:36.095962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-10-07 09:52:36.096289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-10-07 09:52:36.096296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-10-07 09:52:36.096601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-10-07 09:52:36.096609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-10-07 09:52:36.096821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-10-07 09:52:36.096829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-10-07 09:52:36.097145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-10-07 09:52:36.097154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 
00:31:36.515 [2024-10-07 09:52:36.097476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-10-07 09:52:36.097486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-10-07 09:52:36.097706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-10-07 09:52:36.097715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-10-07 09:52:36.097768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-10-07 09:52:36.097777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-10-07 09:52:36.097994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-10-07 09:52:36.098002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-10-07 09:52:36.098326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-10-07 09:52:36.098333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-10-07 09:52:36.098661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-10-07 09:52:36.098669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-10-07 09:52:36.098993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-10-07 09:52:36.099001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-10-07 09:52:36.099325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-10-07 09:52:36.099333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-10-07 09:52:36.099653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-10-07 09:52:36.099663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 00:31:36.515 [2024-10-07 09:52:36.099855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.515 [2024-10-07 09:52:36.099864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.515 qpair failed and we were unable to recover it. 
00:31:36.795 [2024-10-07 09:52:36.161312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.795 [2024-10-07 09:52:36.161320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.795 qpair failed and we were unable to recover it. 00:31:36.795 [2024-10-07 09:52:36.161642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.795 [2024-10-07 09:52:36.161650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.795 qpair failed and we were unable to recover it. 00:31:36.795 [2024-10-07 09:52:36.161984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.795 [2024-10-07 09:52:36.161991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.795 qpair failed and we were unable to recover it. 00:31:36.795 [2024-10-07 09:52:36.162308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.795 [2024-10-07 09:52:36.162315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.795 qpair failed and we were unable to recover it. 00:31:36.795 [2024-10-07 09:52:36.162637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.795 [2024-10-07 09:52:36.162644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.795 qpair failed and we were unable to recover it. 00:31:36.795 [2024-10-07 09:52:36.162875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.795 [2024-10-07 09:52:36.162885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.795 qpair failed and we were unable to recover it. 00:31:36.795 [2024-10-07 09:52:36.163195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.795 [2024-10-07 09:52:36.163205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.795 qpair failed and we were unable to recover it. 00:31:36.795 [2024-10-07 09:52:36.163547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.795 [2024-10-07 09:52:36.163555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.795 qpair failed and we were unable to recover it. 00:31:36.795 [2024-10-07 09:52:36.163879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.795 [2024-10-07 09:52:36.163886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.795 qpair failed and we were unable to recover it. 00:31:36.795 [2024-10-07 09:52:36.164207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.795 [2024-10-07 09:52:36.164214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.795 qpair failed and we were unable to recover it. 
00:31:36.795 [2024-10-07 09:52:36.164454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.795 [2024-10-07 09:52:36.164460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.795 qpair failed and we were unable to recover it. 00:31:36.795 [2024-10-07 09:52:36.164797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.795 [2024-10-07 09:52:36.164804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.795 qpair failed and we were unable to recover it. 00:31:36.795 [2024-10-07 09:52:36.165123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.795 [2024-10-07 09:52:36.165131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.795 qpair failed and we were unable to recover it. 00:31:36.795 [2024-10-07 09:52:36.165333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.165340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.165659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.165666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.165726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.165733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.166057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.166063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.166256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.166262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.166591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.166598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.166919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.166925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 
00:31:36.796 [2024-10-07 09:52:36.167245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.167254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.167469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.167477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.167563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.167570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.167887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.167895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.168182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.168190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.168400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.168407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.168660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.168669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.168995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.169003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.169330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.169338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.169654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.169663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 
00:31:36.796 [2024-10-07 09:52:36.170003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.170013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.170355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.170364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.170591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.170601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.170906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.170918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.171245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.171254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.171473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.171482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.171758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.171767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.172098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.172109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.172425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.172435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.172800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.172811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 
00:31:36.796 [2024-10-07 09:52:36.173127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.173137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.173485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.173494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.173791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.173802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.174130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.174139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.174418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.174429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.174754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.174763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.175102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.175111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.175408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.175418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.175737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.175746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.176083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.176092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 
00:31:36.796 [2024-10-07 09:52:36.176410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.176421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.176754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.796 [2024-10-07 09:52:36.176764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.796 qpair failed and we were unable to recover it. 00:31:36.796 [2024-10-07 09:52:36.177082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.177092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.177412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.177422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.177640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.177649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.178027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.178036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.178353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.178362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.178686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.178696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.178881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.178890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.179229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.179240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 
00:31:36.797 [2024-10-07 09:52:36.179431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.179440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.179796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.179805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.180129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.180138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.180461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.180470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.180792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.180802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.181107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.181116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.181442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.181453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.181767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.181777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.181954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.181963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.182266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.182275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 
00:31:36.797 [2024-10-07 09:52:36.182599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.182609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.182930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.182940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.183262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.183272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.183452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.183463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.183788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.183800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.184137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.184147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.184473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.184483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.184690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.184700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.185073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.185083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.185490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.185501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 
00:31:36.797 [2024-10-07 09:52:36.185817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.185828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.186000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.186009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.186345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.186354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.186697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.186706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.187043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.187052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.187375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.187384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.187692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.187701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.188041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.188051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.188386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.188397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 00:31:36.797 [2024-10-07 09:52:36.188755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.797 [2024-10-07 09:52:36.188763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.797 qpair failed and we were unable to recover it. 
00:31:36.798 [2024-10-07 09:52:36.189085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.189093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.189338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.189345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.189680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.189688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.189898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.189906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.190224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.190231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.190565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.190575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.190795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.190804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.191080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.191088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.191412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.191420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.191592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.191601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 
00:31:36.798 [2024-10-07 09:52:36.191797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.191804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.192057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.192067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.192414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.192422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.192665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.192674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.192987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.192995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.193213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.193231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.193404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.193412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.193689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.193697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.194036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.194044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.194251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.194259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 
00:31:36.798 [2024-10-07 09:52:36.194606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.194623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.194943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.194953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.195275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.195283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.195602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.195610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.195814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.195824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.196116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.196124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.196399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.196407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.196655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.196663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.197012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.197020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 00:31:36.798 [2024-10-07 09:52:36.197357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.798 [2024-10-07 09:52:36.197366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.798 qpair failed and we were unable to recover it. 
00:31:36.798 [2024-10-07 09:52:36.197689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.197699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 00:31:36.799 [2024-10-07 09:52:36.198117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.198125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 00:31:36.799 [2024-10-07 09:52:36.198499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.198507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 00:31:36.799 [2024-10-07 09:52:36.198848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.198856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 00:31:36.799 [2024-10-07 09:52:36.199180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.199188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 00:31:36.799 [2024-10-07 09:52:36.199509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.199519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 00:31:36.799 [2024-10-07 09:52:36.199701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.199711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 00:31:36.799 [2024-10-07 09:52:36.199997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.200005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 00:31:36.799 [2024-10-07 09:52:36.200177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.200186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 00:31:36.799 [2024-10-07 09:52:36.200483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.200492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 
00:31:36.799 [2024-10-07 09:52:36.200819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.200827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 00:31:36.799 [2024-10-07 09:52:36.201037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.201044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 00:31:36.799 [2024-10-07 09:52:36.201369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.201379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 00:31:36.799 [2024-10-07 09:52:36.201704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.201713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 00:31:36.799 [2024-10-07 09:52:36.202058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.202065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 00:31:36.799 [2024-10-07 09:52:36.202376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.202384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 00:31:36.799 [2024-10-07 09:52:36.202703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.202711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 00:31:36.799 [2024-10-07 09:52:36.203034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.203041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 00:31:36.799 [2024-10-07 09:52:36.203247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.203256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 00:31:36.799 [2024-10-07 09:52:36.203587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.799 [2024-10-07 09:52:36.203596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.799 qpair failed and we were unable to recover it. 
00:31:36.799 [2024-10-07 09:52:36.203933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.799 [2024-10-07 09:52:36.203941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:36.799 qpair failed and we were unable to recover it.
00:31:36.799 [... the same three-line failure (connect() failed, errno = 111 -> sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats verbatim for every reconnect attempt from 09:52:36.204154 through 09:52:36.244901; only the timestamps differ ...]
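errno = 111 is Linux ECONNREFUSED: while the target application is down, nothing is listening on 10.0.0.2 port 4420, so the peer answers each connection attempt with a reset and every reconnect made by nvme_tcp_qpair_connect_sock fails immediately. A minimal shell sketch (not part of the test suite; 127.0.0.1:4420 stands in for the test's 10.0.0.2:4420) that reproduces the same errno:

    # bash's /dev/tcp pseudo-path performs a plain connect(2); with no listener
    # on the port, connect() fails and errno is set to ECONNREFUSED (111).
    bash -c 'exec 3<>/dev/tcp/127.0.0.1/4420'
    # bash: connect: Connection refused
    # Confirm the errno value with strace:
    strace -e trace=connect bash -c 'exec 3<>/dev/tcp/127.0.0.1/4420'
    # ... connect(3, {sa_family=AF_INET, sin_port=htons(4420), ...}, 16) = -1 ECONNREFUSED (Connection refused)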
00:31:36.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3552745 Killed "${NVMF_APP[@]}" "$@"
00:31:36.802 [... the connect() failed, errno = 111 / tqpair=0x85c550 error triplet keeps repeating before, between, and after the script-trace lines below, from 09:52:36.245250 through 09:52:36.267644 ...]
00:31:36.802 09:52:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:31:36.802 09:52:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:31:36.803 09:52:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:31:36.803 09:52:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable
00:31:36.803 09:52:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:36.803 09:52:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3553774
00:31:36.803 09:52:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3553774
00:31:36.803 09:52:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:31:36.803 09:52:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # '[' -z 3553774 ']'
00:31:36.804 09:52:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:36.804 09:52:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local max_retries=100
00:31:36.804 09:52:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:36.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:36.804 09:52:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@843 -- # xtrace_disable
00:31:36.804 09:52:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
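For readers unfamiliar with the launch line above: nvmf_tgt runs inside the cvl_0_0_ns_spdk network namespace via ip netns exec, so the initiator's TCP traffic to it crosses a namespace boundary. A generic sketch of that pattern, with a hypothetical namespace name and server binary (neither is taken from this run's setup scripts; requires root):

  ip netns add demo_ns                       # create a named network namespace
  ip netns exec demo_ns ip link set lo up    # bring up loopback inside it
  ip netns exec demo_ns some_server &        # some_server is a hypothetical binary
  ip netns exec demo_ns ss -ltn              # verify its listening sockets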
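The rpc_addr=/var/tmp/spdk.sock and max_retries=100 locals traced earlier belong to the harness's wait-for-listen step. A minimal poll loop of the same shape, assuming the convention of probing for the UNIX-domain socket (the exact helper body in autotest_common.sh may differ):

  # Sketch: poll until the SPDK RPC socket exists, give up after max_retries tries
  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  for ((i = 0; i < max_retries; i++)); do
      [ -S "$rpc_addr" ] && break   # -S: path exists and is a socket
      sleep 0.1
  done
  [ -S "$rpc_addr" ] || echo "target never created $rpc_addr" >&2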
[... the same repeating connect() error continues, 09:52:36.312339 through 09:52:36.313839 ...]
00:31:36.809 [2024-10-07 09:52:36.314006] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization...
00:31:36.809 [2024-10-07 09:52:36.314064] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the repeating connect() error resumes, 09:52:36.314161 through 09:52:36.314883 ...]
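The EAL banner above records the core mask handed to DPDK: -c takes a hexadecimal bitmap of CPU cores, so 0xF0 sets bits 4 through 7 and the nvmf app is pinned to cores 4, 5, 6, and 7. A quick illustrative decoder for such a mask (not part of the test itself):

/* Decode a DPDK-style "-c" core mask; 0xF0 from the banner selects cores 4-7. */
#include <stdio.h>

int main(void)
{
    unsigned long long coremask = 0xF0;              /* from "-c 0xF0" above */
    for (int core = 0; core < 64; core++) {
        if (coremask & (1ULL << core)) {
            printf("core %d enabled\n", core);       /* prints cores 4, 5, 6, 7 */
        }
    }
    return 0;
}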
[... the same three-line connect()/qpair error keeps repeating with advancing timestamps, 09:52:36.315088 through 09:52:36.370286, always against tqpair=0x85c550, addr=10.0.0.2, port=4420 ...]
00:31:36.814 [2024-10-07 09:52:36.370535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.370544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.370866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.370876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.371043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.371054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.371388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.371397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.371733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.371742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.371933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.371942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.372275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.372283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.372614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.372634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.372931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.372941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.373278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.373286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 
00:31:36.814 [2024-10-07 09:52:36.373519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.373531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.373841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.373850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.374036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.374044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.374402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.374410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.374770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.374778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.374982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.374991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.375351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.375361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.375682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.375691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.376019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.376028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.376223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.376232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 
00:31:36.814 [2024-10-07 09:52:36.376576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.376586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.376917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.376927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.377180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.377188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.377514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.377522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.377848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.377858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.378179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.378187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.378505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.378514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.378726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.378735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.379030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.379038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.379394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.379403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 
00:31:36.814 [2024-10-07 09:52:36.379747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.379757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.380076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.380084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.380436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.380444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.380761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.380770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.381094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.381102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.381437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.381445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.381761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.381771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.381951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.814 [2024-10-07 09:52:36.381960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.814 qpair failed and we were unable to recover it. 00:31:36.814 [2024-10-07 09:52:36.382328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.382336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.382539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.382546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 
00:31:36.815 [2024-10-07 09:52:36.382831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.382840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.383161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.383169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.383516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.383525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.383844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.383853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.384082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.384091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.384416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.384424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.384770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.384779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.385097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.385105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.385423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.385431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.385753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.385761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 
00:31:36.815 [2024-10-07 09:52:36.386065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.386074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.386399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.386410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.386686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.386695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.387031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.387039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.387369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.387377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.387637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.387645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.387993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.388002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.388341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.388349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.388671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.388679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.388987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.388996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 
00:31:36.815 [2024-10-07 09:52:36.389316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.389326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.389647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.389656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.389973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.389982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.390314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.390322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.390534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.390542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.390862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.390871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.391048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.391057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.391302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.391310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.391633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.391642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.391873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.391882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 
00:31:36.815 [2024-10-07 09:52:36.392097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.392106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.392440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.392448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.392776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.392785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.392982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.392992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.393377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.393385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.393694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.393705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.815 [2024-10-07 09:52:36.394034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.815 [2024-10-07 09:52:36.394043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.815 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.394360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.394369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.394727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.394739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.394929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.394938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 
00:31:36.816 [2024-10-07 09:52:36.395236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.395245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.395575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.395583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.395802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.395812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.396151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.396160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.396497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.396506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.396740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.396749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.397074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.397083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.397255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.397263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.397622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.397631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.397939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.397950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 
00:31:36.816 [2024-10-07 09:52:36.398167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.398177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.398503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.398513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.398834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.398843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.399159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.399170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.399490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.399499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.399834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.399843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.400033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.400041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.400241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.400249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.400519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.400528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.400871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.400880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 
00:31:36.816 [2024-10-07 09:52:36.401198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.401208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.401525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.401534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.401693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.401703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.402053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.402062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.402403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.402412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.402743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.402754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.403091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.403100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.403404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.403413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.403746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.403755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.403963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.403973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 
00:31:36.816 [2024-10-07 09:52:36.404225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.816 [2024-10-07 09:52:36.404234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.816 qpair failed and we were unable to recover it. 00:31:36.816 [2024-10-07 09:52:36.404554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.404563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.404634] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:36.817 [2024-10-07 09:52:36.404895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.404905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.405227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.405235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.405437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.405446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.405768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.405778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.406125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.406135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.406440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.406450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.406784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.406794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.407135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.407144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 
00:31:36.817 [2024-10-07 09:52:36.407472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.407481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.407791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.407801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.408011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.408021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.408319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.408328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.408522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.408533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.408900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.408909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.409232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.409244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.409566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.409576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.409900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.409909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.410103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.410112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 
00:31:36.817 [2024-10-07 09:52:36.410439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.410448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.410744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.410753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.410973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.410984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.411317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.411326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.411647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.411657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.411873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.411884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.412207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.412216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.412538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.412548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.412673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.412683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 00:31:36.817 [2024-10-07 09:52:36.412933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.817 [2024-10-07 09:52:36.412941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:36.817 qpair failed and we were unable to recover it. 
00:31:37.099 [2024-10-07 09:52:36.473739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-10-07 09:52:36.473747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-10-07 09:52:36.474076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-10-07 09:52:36.474085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-10-07 09:52:36.474432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-10-07 09:52:36.474440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-10-07 09:52:36.474761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-10-07 09:52:36.474770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-10-07 09:52:36.475104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-10-07 09:52:36.475112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-10-07 09:52:36.475431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-10-07 09:52:36.475440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-10-07 09:52:36.475665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-10-07 09:52:36.475674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-10-07 09:52:36.475876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-10-07 09:52:36.475883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-10-07 09:52:36.476184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-10-07 09:52:36.476191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-10-07 09:52:36.476537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-10-07 09:52:36.476544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 
00:31:37.099 [2024-10-07 09:52:36.476753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-10-07 09:52:36.476764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-10-07 09:52:36.477089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-10-07 09:52:36.477099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-10-07 09:52:36.477510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.099 [2024-10-07 09:52:36.477519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.099 qpair failed and we were unable to recover it. 00:31:37.099 [2024-10-07 09:52:36.477916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.477925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.478246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.478256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.478558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.478565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.478773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.478781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.479118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.479126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.479454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.479461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.479786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.479795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 
00:31:37.100 [2024-10-07 09:52:36.480140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.480151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.480469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.480478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.480675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.480684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.480974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.480982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.481336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.481345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.481673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.481683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.481980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.481988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.482316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.482325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.482640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.482649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.482895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.482903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 
00:31:37.100 [2024-10-07 09:52:36.483251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.483261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.483468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.483476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.483755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.483763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.484095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.484102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.484433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.484441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.484794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.484804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.485139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.485147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.485480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.485489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.485834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.485845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.486021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.486029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 
00:31:37.100 [2024-10-07 09:52:36.486296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.486304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.486697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.486707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.487051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.487059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.487368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.487377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.487701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.487710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.488008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.488015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.488343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.488350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.488668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.488676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.488970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.488978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.489312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.489319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 
00:31:37.100 [2024-10-07 09:52:36.489717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.489732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.490076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.100 [2024-10-07 09:52:36.490085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.100 qpair failed and we were unable to recover it. 00:31:37.100 [2024-10-07 09:52:36.490416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-10-07 09:52:36.490425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-10-07 09:52:36.490748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-10-07 09:52:36.490755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-10-07 09:52:36.490968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-10-07 09:52:36.490977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-10-07 09:52:36.491314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-10-07 09:52:36.491322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-10-07 09:52:36.491628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-10-07 09:52:36.491637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-10-07 09:52:36.491953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-10-07 09:52:36.491960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-10-07 09:52:36.492290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-10-07 09:52:36.492299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-10-07 09:52:36.492622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-10-07 09:52:36.492630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 
00:31:37.101 [2024-10-07 09:52:36.492950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-10-07 09:52:36.492958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-10-07 09:52:36.493272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-10-07 09:52:36.493282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-10-07 09:52:36.493603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-10-07 09:52:36.493610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-10-07 09:52:36.493968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-10-07 09:52:36.493975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-10-07 09:52:36.494297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-10-07 09:52:36.494305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-10-07 09:52:36.494629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-10-07 09:52:36.494637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-10-07 09:52:36.494988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-10-07 09:52:36.494997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-10-07 09:52:36.495323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-10-07 09:52:36.495331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-10-07 09:52:36.495652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-10-07 09:52:36.495660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-10-07 09:52:36.495868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.101 [2024-10-07 09:52:36.495877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.101 qpair failed and we were unable to recover it. 
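errno 111 on Linux is ECONNREFUSED: the TCP SYN reached 10.0.0.2, but nothing was accepting on port 4420 (the NVMe/TCP target was not yet listening, or had already gone away), so every attempt by the initiator to build the qpair's socket fails at connect(). The following is a minimal, self-contained reproduction in plain POSIX C; the address and port are copied from the log, the file name and output format are illustrative, and this is not SPDK's posix_sock_create():

/* econnrefused_demo.c - reproduce "connect() failed, errno = 111" against a
 * host/port with no listener. Plain POSIX sockets, not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int
main(void)
{
	struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
	int fd;

	inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
	fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}
	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
		/* With the port actively refusing (RST), this prints errno = 111. */
		printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
	}
	close(fd);
	return 0;
}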
[... triplet continues, 09:52:36.496 through 09:52:36.498 ...]
00:31:37.101 [2024-10-07 09:52:36.499158] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:37.101 [2024-10-07 09:52:36.499204] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:37.101 [2024-10-07 09:52:36.499213] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:37.101 [2024-10-07 09:52:36.499220] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:37.101 [2024-10-07 09:52:36.499226] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[... triplet continues, 09:52:36.499 through 09:52:36.501 ...]
00:31:37.102 [2024-10-07 09:52:36.501279] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5
00:31:37.102 [2024-10-07 09:52:36.501413] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6
00:31:37.102 [2024-10-07 09:52:36.501595] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
00:31:37.102 [2024-10-07 09:52:36.501595] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7
[... triplet continues, interleaved with the reactor notices above, 09:52:36.501 through 09:52:36.504 ...]
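The four reactor_run notices show the SPDK event framework starting one reactor (polling thread) per core in the configured core mask; cores 4-7 correspond to a mask of 0xf0. (The 0xFFFF in the app_setup_trace notice is the tracepoint group mask, unrelated to the core mask.) Below is a hedged sketch of an SPDK application start-up that would produce the same set of reactors; it assumes a recent SPDK where spdk_app_opts_init() takes the struct size (older releases took only the pointer), the application name is made up, and this is not the nvmf target's actual start-up code:

/* reactor_mask_demo.c - illustrative SPDK app start-up, not the code that
 * produced this log. reactor_mask 0xf0 is inferred from the notices above. */
#include "spdk/event.h"
#include "spdk/log.h"

static void
app_started(void *ctx)
{
	/* By the time this runs, reactor_run has started one reactor per
	 * core in opts.reactor_mask (cores 4-7 for 0xf0). */
	SPDK_NOTICELOG("all reactors up\n");
	spdk_app_stop(0);
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts;
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts)); /* assumption: two-arg form */
	opts.name = "reactor_mask_demo";
	opts.reactor_mask = "0xf0"; /* cores 4, 5, 6, 7 -> four reactors */

	rc = spdk_app_start(&opts, app_started, NULL);
	spdk_app_fini();
	return rc;
}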
00:31:37.102 [2024-10-07 09:52:36.504400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.504409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.504743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.504751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.504953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.504961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.505345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.505353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.505683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.505691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.506017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.506024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.506343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.506351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.506675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.506683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.507012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.507020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.507275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.507283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 
00:31:37.102 [2024-10-07 09:52:36.507569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.507577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.507910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.507918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.508252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.508260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.508485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.508494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.508604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.508612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.508979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.508989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.509368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.509378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.509704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.509712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.509956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.509965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.510160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.510168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 
00:31:37.102 [2024-10-07 09:52:36.510353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.510363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.510592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.510602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-10-07 09:52:36.510926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.102 [2024-10-07 09:52:36.510934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.511225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.511233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.511434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.511442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.511671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.511680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.511861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.511869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.512200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.512209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.512557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.512564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.512851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.512859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 
00:31:37.103 [2024-10-07 09:52:36.513102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.513110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.513237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.513244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.513349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.513356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.513475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.513483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.513821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.513829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.514159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.514167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.514581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.514590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.514756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.514763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.515101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.515111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.515445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.515453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 
00:31:37.103 [2024-10-07 09:52:36.515678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.515687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.516078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.516085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.516296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.516304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.516577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.516586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.516790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.516800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.517191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.517201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.517526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.517538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.517769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.517777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.518089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.518097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.518274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.518283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 
00:31:37.103 [2024-10-07 09:52:36.518647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.518657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.518995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.519004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.519322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.519331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.519679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.519688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.519929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.519938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.520271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.520280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.520493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.520504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.520820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.520828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.521133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.521141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.103 [2024-10-07 09:52:36.521323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.103 [2024-10-07 09:52:36.521332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.103 qpair failed and we were unable to recover it. 
00:31:37.103 [2024-10-07 09:52:36.521663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.103 [2024-10-07 09:52:36.521673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.103 qpair failed and we were unable to recover it.
00:31:37.103 [2024-10-07 09:52:36.522026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.103 [2024-10-07 09:52:36.522036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.522097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.522105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.522436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.522445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.522731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.522739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.523051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.523059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.523126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.523133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.523470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.523479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.523794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.523802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.524147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.524156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.524590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.524598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.524677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.524685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.525015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.525024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.525214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.525222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.525446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.525455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.525629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.525638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.526041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.526049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.526378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.526387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.526741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.526750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.527086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.527095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.527420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.527428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.527614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.527635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.527986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.527994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.528328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.528342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.528670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.528678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.529065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.529073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.529358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.529366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.529577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.529585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.529773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.529781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.529957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.529965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.530201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.530209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.530407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.530416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.530613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.530633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.530862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.530870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.531094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.531103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.531296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.531304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.531504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.531513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.531572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.531579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.531766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.531775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.104 qpair failed and we were unable to recover it.
00:31:37.104 [2024-10-07 09:52:36.531966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.104 [2024-10-07 09:52:36.531975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.532176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.532185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.532484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.532493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.532819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.532828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.533017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.533025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.533364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.533373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.533792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.533802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.534135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.534143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.534477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.534484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.534748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.534757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.534954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.534964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.535326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.535335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.535665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.535673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.535923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.535932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.536278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.536287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.536484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.536491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.536791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.536799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.536992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.537003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.537203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.537212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.537493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.537501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.537705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.537714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.538048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.538057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.538463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.538471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.538658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.538666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.539020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.539029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.539341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.539353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.539676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.539685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.539991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.539999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.540335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.540343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.540572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.540580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.540884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.540892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.541054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.541060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.541352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.541360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.541693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.541702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.541888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.541896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.542275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.542283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.542484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.542493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.542828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.542838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.543163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.543171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.105 qpair failed and we were unable to recover it.
00:31:37.105 [2024-10-07 09:52:36.543486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.105 [2024-10-07 09:52:36.543495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.543856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.543864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.544056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.544064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.544253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.544261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.544613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.544628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.544915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.544924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.545136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.545145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.545466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.545474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.545788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.545797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.546134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.546141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.546336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.546344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.546723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.546730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.547077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.547085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.547424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.547435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.547775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.547785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.548138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.548147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.548463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.548473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.548746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.548753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.549081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.549089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.549427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.549435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.549809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.549817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.550144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.550152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.550342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.550350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.550669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.550678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.550872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.550882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.551205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.551213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.551488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.551495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.551757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.551766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.552108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.552116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.552448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.552456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.552734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.552742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.552965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.552974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.553207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.553215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.553545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.553553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.106 qpair failed and we were unable to recover it.
00:31:37.106 [2024-10-07 09:52:36.553770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.106 [2024-10-07 09:52:36.553779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.554118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.554125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.554450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.554459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.554660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.554676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.554901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.554909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.555217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.555225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.555417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.555424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.555740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.555748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.555891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.555898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.556213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.556222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.556569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.556578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.556885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.556893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.557245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.557254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.557643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.557653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.557856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.557865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.558078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.558086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.558325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.558333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.558702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.558710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.558911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.558918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.559361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.559369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.559721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.559733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.560100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.560108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.560424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.560432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.560660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.560668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.561023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.561032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.561387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.561395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.561603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.561611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.562000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.562009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.562349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.562357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.562576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.562584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.562984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.562993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.563323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.563331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.563542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.563551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.563923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.563932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.564249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.564257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.564589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.564596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.564904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.564913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.565232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.565240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.565552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.107 [2024-10-07 09:52:36.565560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.107 qpair failed and we were unable to recover it.
00:31:37.107 [2024-10-07 09:52:36.565888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.565897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.566146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.566154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.566348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.566356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.566532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.566541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.566846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.566856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.567073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.567081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.567376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.567385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.567628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.567638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.567956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.567966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.568208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.568215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.568603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.568611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.568664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.568670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.568814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.568823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.569123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.569130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.569494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.569502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.569687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.569695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.570123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.570131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.570309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.570316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.570727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.108 [2024-10-07 09:52:36.570735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.108 qpair failed and we were unable to recover it.
00:31:37.108 [2024-10-07 09:52:36.571096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.571105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 00:31:37.108 [2024-10-07 09:52:36.571352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.571360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 00:31:37.108 [2024-10-07 09:52:36.571670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.571678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 00:31:37.108 [2024-10-07 09:52:36.572006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.572014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 00:31:37.108 [2024-10-07 09:52:36.572382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.572390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 00:31:37.108 [2024-10-07 09:52:36.572802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.572811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 00:31:37.108 [2024-10-07 09:52:36.573130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.573137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 00:31:37.108 [2024-10-07 09:52:36.573447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.573455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 00:31:37.108 [2024-10-07 09:52:36.573788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.573796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 00:31:37.108 [2024-10-07 09:52:36.574119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.574127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 
00:31:37.108 [2024-10-07 09:52:36.574461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.574469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 00:31:37.108 [2024-10-07 09:52:36.574810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.574819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 00:31:37.108 [2024-10-07 09:52:36.575180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.575187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 00:31:37.108 [2024-10-07 09:52:36.575366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.575373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 00:31:37.108 [2024-10-07 09:52:36.575686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.575693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 00:31:37.108 [2024-10-07 09:52:36.575888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.575895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 00:31:37.108 [2024-10-07 09:52:36.576095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.576103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 00:31:37.108 [2024-10-07 09:52:36.576415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.576424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 00:31:37.108 [2024-10-07 09:52:36.576613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.576631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 00:31:37.108 [2024-10-07 09:52:36.576974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.108 [2024-10-07 09:52:36.576982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.108 qpair failed and we were unable to recover it. 
00:31:37.108 [2024-10-07 09:52:36.577300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.577308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.577643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.577651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.577972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.577981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.578330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.578337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.578677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.578685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.579020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.579027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.579363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.579371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.579696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.579704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.579997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.580005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.580348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.580356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 
00:31:37.109 [2024-10-07 09:52:36.580671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.580681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.581022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.581030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.581204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.581212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.581578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.581587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.581918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.581926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.582252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.582260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.582411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.582418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.582626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.582634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.582808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.582815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.583171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.583179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 
00:31:37.109 [2024-10-07 09:52:36.583393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.583400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.583684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.583692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.584030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.584038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.584425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.584432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.584645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.584653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.584974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.584983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.585170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.585179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.585485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.585494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.586149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.586157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.586477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.586485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 
00:31:37.109 [2024-10-07 09:52:36.586819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.586827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.587006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.587013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.587422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.587429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.587640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.587647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.587978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.587986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.588344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.588352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.588680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.588688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.588878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.588886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.589111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.109 [2024-10-07 09:52:36.589119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.109 qpair failed and we were unable to recover it. 00:31:37.109 [2024-10-07 09:52:36.589289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.589297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 
00:31:37.110 [2024-10-07 09:52:36.589495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.589502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.589666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.589675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.589993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.590000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.590327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.590335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.590518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.590527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.590866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.590874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.591248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.591256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.591546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.591554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.591607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.591614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.591914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.591922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 
00:31:37.110 [2024-10-07 09:52:36.592248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.592256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.592592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.592600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.592934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.592942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.593269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.593276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.593634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.593642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.593982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.593990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.594308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.594315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.594511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.594520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.594866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.594874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.595195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.595202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 
00:31:37.110 [2024-10-07 09:52:36.595537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.595545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.595769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.595779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.596224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.596233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.596614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.596628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.596955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.596962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.597281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.597290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.597612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.597629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.597952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.597961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.598140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.598149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.598350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.598358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 
00:31:37.110 [2024-10-07 09:52:36.598677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.598685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.598868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.598876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.110 [2024-10-07 09:52:36.599078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.110 [2024-10-07 09:52:36.599085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.110 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.599399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.599407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.599597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.599605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.599918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.599926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.600129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.600136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.600362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.600370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.600572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.600581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.600946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.600954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 
00:31:37.111 [2024-10-07 09:52:36.601007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.601015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.601316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.601325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.601533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.601541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.601824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.601832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.602046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.602053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.602259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.602267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.602407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.602414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.602787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.602795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.603109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.603117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.603466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.603473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 
00:31:37.111 [2024-10-07 09:52:36.603805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.603813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.604160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.604169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.604498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.604507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.604693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.604702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.605020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.605028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.605223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.605231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.605469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.605477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.605688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.605696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.605859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.605866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.606193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.606201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 
00:31:37.111 [2024-10-07 09:52:36.606404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.606412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.606604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.606613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.606965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.606973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.607307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.607315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.607537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.607546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.607887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.607896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.608052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.608061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.608290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.608299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.608525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.608534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.608830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.608839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 
00:31:37.111 [2024-10-07 09:52:36.609164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.609173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.111 [2024-10-07 09:52:36.609547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.111 [2024-10-07 09:52:36.609557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.111 qpair failed and we were unable to recover it. 00:31:37.112 [2024-10-07 09:52:36.609881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.112 [2024-10-07 09:52:36.609890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.112 qpair failed and we were unable to recover it. 00:31:37.112 [2024-10-07 09:52:36.610108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.112 [2024-10-07 09:52:36.610117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.112 qpair failed and we were unable to recover it. 00:31:37.112 [2024-10-07 09:52:36.610316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.112 [2024-10-07 09:52:36.610325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.112 qpair failed and we were unable to recover it. 00:31:37.112 [2024-10-07 09:52:36.610517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.112 [2024-10-07 09:52:36.610527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.112 qpair failed and we were unable to recover it. 00:31:37.112 [2024-10-07 09:52:36.610716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.112 [2024-10-07 09:52:36.610724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.112 qpair failed and we were unable to recover it. 00:31:37.112 [2024-10-07 09:52:36.611123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.112 [2024-10-07 09:52:36.611132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.112 qpair failed and we were unable to recover it. 00:31:37.112 [2024-10-07 09:52:36.611458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.112 [2024-10-07 09:52:36.611467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.112 qpair failed and we were unable to recover it. 00:31:37.112 [2024-10-07 09:52:36.611876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.112 [2024-10-07 09:52:36.611884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.112 qpair failed and we were unable to recover it. 
00:31:37.112 [2024-10-07 09:52:36.612084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.112 [2024-10-07 09:52:36.612092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.112 qpair failed and we were unable to recover it. 00:31:37.112 [2024-10-07 09:52:36.612434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.112 [2024-10-07 09:52:36.612441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.112 qpair failed and we were unable to recover it. 00:31:37.112 [2024-10-07 09:52:36.612630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.112 [2024-10-07 09:52:36.612638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.112 qpair failed and we were unable to recover it. 00:31:37.112 [2024-10-07 09:52:36.612956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.112 [2024-10-07 09:52:36.612963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.112 qpair failed and we were unable to recover it. 00:31:37.112 [2024-10-07 09:52:36.613138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.112 [2024-10-07 09:52:36.613145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.112 qpair failed and we were unable to recover it. 00:31:37.112 [2024-10-07 09:52:36.613432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.112 [2024-10-07 09:52:36.613439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.112 qpair failed and we were unable to recover it. 00:31:37.112 [2024-10-07 09:52:36.613766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.112 [2024-10-07 09:52:36.613773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.112 qpair failed and we were unable to recover it. 00:31:37.112 [2024-10-07 09:52:36.614077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.112 [2024-10-07 09:52:36.614085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.112 qpair failed and we were unable to recover it. 00:31:37.112 [2024-10-07 09:52:36.614298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.112 [2024-10-07 09:52:36.614306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.112 qpair failed and we were unable to recover it. 00:31:37.112 [2024-10-07 09:52:36.614479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.112 [2024-10-07 09:52:36.614486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.112 qpair failed and we were unable to recover it. 
00:31:37.112 [2024-10-07 09:52:36.614713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.112 [2024-10-07 09:52:36.614721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.112 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats ~200 more times between 09:52:36.615 and 09:52:36.672 (console timestamps 00:31:37.112 through 00:31:37.118), varying only in timestamp: every connect() attempt to 10.0.0.2 port 4420 for tqpair=0x85c550 fails with errno = 111, and each time the qpair fails without recovering ...]
00:31:37.118 [2024-10-07 09:52:36.672753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.118 [2024-10-07 09:52:36.672760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.118 qpair failed and we were unable to recover it.
00:31:37.118 [2024-10-07 09:52:36.672954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.672961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.673417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.673424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.673729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.673737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.673777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.673784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.674114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.674122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.674351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.674358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.674554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.674562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.674895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.674903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.675201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.675209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.675531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.675538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 
00:31:37.118 [2024-10-07 09:52:36.675952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.675964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.676009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.676015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.676373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.676381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.676718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.676726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.677041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.677050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.677382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.677391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.677755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.677763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.677970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.677980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.678267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.678275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.678598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.678606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 
00:31:37.118 [2024-10-07 09:52:36.678771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.678779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.678820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.678826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.679051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.679058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.679346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.679354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.679679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.679687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.680003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.680012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.680059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.680066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.680442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.680449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.680637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.680646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.680935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.680944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 
00:31:37.118 [2024-10-07 09:52:36.681063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.681070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.681429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.681436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.681749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.681757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.682087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.682095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.682260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.682268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.682441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.118 [2024-10-07 09:52:36.682448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.118 qpair failed and we were unable to recover it. 00:31:37.118 [2024-10-07 09:52:36.682714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.682721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.683040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.683048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.683449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.683456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.683753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.683760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 
00:31:37.119 [2024-10-07 09:52:36.684099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.684107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.684436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.684445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.684646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.684656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.684980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.684989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.685210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.685217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.685548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.685555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.685766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.685773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.686088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.686095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.686440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.686448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.686773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.686781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 
00:31:37.119 [2024-10-07 09:52:36.687128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.687135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.687368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.687378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.687698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.687706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.688028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.688035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.688374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.688381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.688547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.688554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.689053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.689060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.689359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.689366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.689652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.689660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.690027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.690035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 
00:31:37.119 [2024-10-07 09:52:36.690209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.690218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.690523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.690530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.690827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.690835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.691021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.691029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.691223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.691230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.691535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.691542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.691952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.691960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.692302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.692310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.692636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.692645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 00:31:37.119 [2024-10-07 09:52:36.693018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.119 [2024-10-07 09:52:36.693026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.119 qpair failed and we were unable to recover it. 
00:31:37.119 [2024-10-07 09:52:36.693237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.693244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.693316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.693323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.693507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.693515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.693806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.693822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.694141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.694148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.694474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.694482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.694854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.694862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.695170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.695178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.695505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.695514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.695843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.695851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 
00:31:37.120 [2024-10-07 09:52:36.696162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.696170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.696499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.696507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.696702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.696711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.697061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.697069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.697239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.697248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.697554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.697562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.697883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.697892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.698061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.698069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.698266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.698275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.698611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.698624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 
00:31:37.120 [2024-10-07 09:52:36.698944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.698952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.699127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.699134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.699361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.699369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.699595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.699602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.699788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.699796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.700136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.700143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.700471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.700479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.700795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.700802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.701123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.701131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.701339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.701346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 
00:31:37.120 [2024-10-07 09:52:36.701536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.701544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.701834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.701842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.702208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.702215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.702512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.702520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.702724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.702732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.703081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.703089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.703454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.703461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.703784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.703792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.704111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.120 [2024-10-07 09:52:36.704118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.120 qpair failed and we were unable to recover it. 00:31:37.120 [2024-10-07 09:52:36.704417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.704424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 
00:31:37.121 [2024-10-07 09:52:36.704645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.704653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 00:31:37.121 [2024-10-07 09:52:36.705026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.705035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 00:31:37.121 [2024-10-07 09:52:36.705360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.705368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 00:31:37.121 [2024-10-07 09:52:36.705562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.705571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 00:31:37.121 [2024-10-07 09:52:36.705870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.705877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 00:31:37.121 [2024-10-07 09:52:36.706199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.706206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 00:31:37.121 [2024-10-07 09:52:36.706525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.706534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 00:31:37.121 [2024-10-07 09:52:36.706707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.706716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 00:31:37.121 [2024-10-07 09:52:36.707100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.707108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 00:31:37.121 [2024-10-07 09:52:36.707345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.707355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 
00:31:37.121 [2024-10-07 09:52:36.707530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.707539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 00:31:37.121 [2024-10-07 09:52:36.707862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.707871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 00:31:37.121 [2024-10-07 09:52:36.708204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.708213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 00:31:37.121 [2024-10-07 09:52:36.708383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.708392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 00:31:37.121 [2024-10-07 09:52:36.708538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.708547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 00:31:37.121 [2024-10-07 09:52:36.708883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.708891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 00:31:37.121 [2024-10-07 09:52:36.709211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.709220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 00:31:37.121 [2024-10-07 09:52:36.709551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.709559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 00:31:37.121 [2024-10-07 09:52:36.709752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.709761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 00:31:37.121 [2024-10-07 09:52:36.710136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.121 [2024-10-07 09:52:36.710145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.121 qpair failed and we were unable to recover it. 
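errno = 111 is ECONNREFUSED on Linux: each connect() reaches 10.0.0.2, but nothing is accepting on port 4420 (the NVMe/TCP well-known port), so the peer resets the handshake immediately and the host abandons the qpair. A minimal standalone sketch, not SPDK's posix_sock_create itself (the address and port are simply copied from the log), showing how such a connect() surfaces errno 111:

/* Minimal sketch, not SPDK code: reproduce the "connect() failed,
 * errno = 111" seen above. 111 is ECONNREFUSED on Linux; the target
 * address/port below are copied from the log. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),            /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

On the wire this is a SYN answered by RST: the remote host is reachable but refuses the connection, which points at the NVMe-oF target no longer listening rather than at a routing problem.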
[... the tqpair=0x85c550 failure repeats a further seven times, 09:52:36.710324 through 09:52:36.712141 ...]
00:31:37.121 [2024-10-07 09:52:36.712184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85a0f0 (9): Bad file descriptor
00:31:37.121 [2024-10-07 09:52:36.712824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.121 [2024-10-07 09:52:36.712921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.121 qpair failed and we were unable to recover it.
00:31:37.121 [2024-10-07 09:52:36.713231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.121 [2024-10-07 09:52:36.713269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.121 qpair failed and we were unable to recover it.
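The single flush failure breaks the pattern: errno 9 is EBADF, i.e. nvme_tcp_qpair_process_completions tried to flush tqpair=0x85a0f0 whose socket descriptor had already been torn down. Immediately afterwards the connect attempts switch from tqpair=0x85c550 to a freshly allocated tqpair=0x7fdd64000b90, so the host appears to be recreating the qpair and retrying. A tiny sketch (generic POSIX, not SPDK code) of how an operation on an already-closed descriptor yields errno 9:

/* Minimal sketch, not SPDK code: an I/O call on a descriptor that was
 * already closed fails with errno 9 (EBADF, "Bad file descriptor"),
 * matching the flush error above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }
    close(fds[1]);                  /* descriptor torn down early */

    if (write(fds[1], "x", 1) < 0) {
        /* Prints: write failed, errno = 9 (Bad file descriptor) */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fds[0]);
    return 0;
}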
00:31:37.121 [2024-10-07 09:52:36.713579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.121 [2024-10-07 09:52:36.713590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.121 qpair failed and we were unable to recover it.
[... the tqpair=0x85c550 failure repeats roughly 30 more times, 09:52:36.713916 through 09:52:36.721151 ...]
00:31:37.122 [2024-10-07 09:52:36.721223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.122 [2024-10-07 09:52:36.721230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.122 qpair failed and we were unable to recover it. 00:31:37.122 [2024-10-07 09:52:36.721463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.122 [2024-10-07 09:52:36.721475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.122 qpair failed and we were unable to recover it. 00:31:37.122 [2024-10-07 09:52:36.721675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.122 [2024-10-07 09:52:36.721682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.122 qpair failed and we were unable to recover it. 00:31:37.122 [2024-10-07 09:52:36.721971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.122 [2024-10-07 09:52:36.721979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.122 qpair failed and we were unable to recover it. 00:31:37.122 [2024-10-07 09:52:36.722316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.122 [2024-10-07 09:52:36.722325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.122 qpair failed and we were unable to recover it. 00:31:37.122 [2024-10-07 09:52:36.722647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.122 [2024-10-07 09:52:36.722655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.122 qpair failed and we were unable to recover it. 00:31:37.122 [2024-10-07 09:52:36.722986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.122 [2024-10-07 09:52:36.722993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.122 qpair failed and we were unable to recover it. 00:31:37.122 [2024-10-07 09:52:36.723319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.122 [2024-10-07 09:52:36.723326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.122 qpair failed and we were unable to recover it. 00:31:37.122 [2024-10-07 09:52:36.723646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.122 [2024-10-07 09:52:36.723653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.122 qpair failed and we were unable to recover it. 00:31:37.122 [2024-10-07 09:52:36.723970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.122 [2024-10-07 09:52:36.723977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.122 qpair failed and we were unable to recover it. 
00:31:37.122 [2024-10-07 09:52:36.724301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.122 [2024-10-07 09:52:36.724308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.122 qpair failed and we were unable to recover it. 00:31:37.122 [2024-10-07 09:52:36.724481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.122 [2024-10-07 09:52:36.724489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.122 qpair failed and we were unable to recover it. 00:31:37.122 [2024-10-07 09:52:36.724781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.122 [2024-10-07 09:52:36.724788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.122 qpair failed and we were unable to recover it. 00:31:37.122 [2024-10-07 09:52:36.725122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.122 [2024-10-07 09:52:36.725129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.122 qpair failed and we were unable to recover it. 00:31:37.122 [2024-10-07 09:52:36.725444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.122 [2024-10-07 09:52:36.725451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.122 qpair failed and we were unable to recover it. 00:31:37.122 [2024-10-07 09:52:36.725809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.122 [2024-10-07 09:52:36.725817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.122 qpair failed and we were unable to recover it. 00:31:37.122 [2024-10-07 09:52:36.726120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.122 [2024-10-07 09:52:36.726128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.122 qpair failed and we were unable to recover it. 00:31:37.122 [2024-10-07 09:52:36.726446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.122 [2024-10-07 09:52:36.726454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.726775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.726782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.726973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.726980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 
00:31:37.123 [2024-10-07 09:52:36.727361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.727369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.727691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.727698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.728017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.728024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.728386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.728394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.728726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.728734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.729134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.729142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.729419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.729427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.729748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.729755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.729945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.729953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.730342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.730349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 
00:31:37.123 [2024-10-07 09:52:36.730553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.730560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.730780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.730796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.731073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.731080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.731433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.731440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.731769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.731777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.732117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.732124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.732298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.732305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.732593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.732600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.732653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.732660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.732889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.732896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 
00:31:37.123 [2024-10-07 09:52:36.733220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.733227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.733548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.733556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.733756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.733764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.734099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.734107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.734318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.734327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.734658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.734666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.734864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.734871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.735157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.735165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.735506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.735513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.735688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.735696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 
00:31:37.123 [2024-10-07 09:52:36.735891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.735898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.736213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.736220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.736551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.736559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.736795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.736803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.736866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.736873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.737061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.123 [2024-10-07 09:52:36.737069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.123 qpair failed and we were unable to recover it. 00:31:37.123 [2024-10-07 09:52:36.737265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.737272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 00:31:37.124 [2024-10-07 09:52:36.737709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.737718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 00:31:37.124 [2024-10-07 09:52:36.738032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.738039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 00:31:37.124 [2024-10-07 09:52:36.738216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.738223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 
00:31:37.124 [2024-10-07 09:52:36.738539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.738546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 00:31:37.124 [2024-10-07 09:52:36.738858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.738875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 00:31:37.124 [2024-10-07 09:52:36.739101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.739108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 00:31:37.124 [2024-10-07 09:52:36.739304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.739311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 00:31:37.124 [2024-10-07 09:52:36.739584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.739591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 00:31:37.124 [2024-10-07 09:52:36.739761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.739770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 00:31:37.124 [2024-10-07 09:52:36.739971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.739978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 00:31:37.124 [2024-10-07 09:52:36.740168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.740175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 00:31:37.124 [2024-10-07 09:52:36.740443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.740450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 00:31:37.124 [2024-10-07 09:52:36.740631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.740638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 
00:31:37.124 [2024-10-07 09:52:36.740830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.740836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 00:31:37.124 [2024-10-07 09:52:36.741056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.741064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 00:31:37.124 [2024-10-07 09:52:36.741385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.741394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 00:31:37.124 [2024-10-07 09:52:36.741689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.741697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 00:31:37.124 [2024-10-07 09:52:36.742030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.742037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 00:31:37.124 [2024-10-07 09:52:36.742208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.124 [2024-10-07 09:52:36.742215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.124 qpair failed and we were unable to recover it. 00:31:37.124 [2024-10-07 09:52:36.742379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.742388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.742680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.742691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.742864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.742871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.743214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.743221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 
00:31:37.398 [2024-10-07 09:52:36.743553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.743561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.743891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.743898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.744219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.744226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.744420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.744428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.744763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.744770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.745230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.745236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.745559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.745567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.745769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.745777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.746137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.746145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.746217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.746224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 
00:31:37.398 [2024-10-07 09:52:36.746516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.746523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.746704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.746712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.746999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.747006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.747328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.747335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.747522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.747531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.747839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.747847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.748181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.748188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.748505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.748512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.748896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.748904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.749071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.749079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 
00:31:37.398 [2024-10-07 09:52:36.749262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.749270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.749605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.749613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.749819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.749827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.750003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.750011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.750310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.398 [2024-10-07 09:52:36.750317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.398 qpair failed and we were unable to recover it. 00:31:37.398 [2024-10-07 09:52:36.750657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.750665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.750991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.750998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.751314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.751321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.751533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.751541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.751714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.751721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 
00:31:37.399 [2024-10-07 09:52:36.751914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.751921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.752085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.752092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.752277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.752285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.752624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.752632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.752974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.752981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.753314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.753321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.753651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.753658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.753841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.753848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.754181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.754188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.754504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.754511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 
00:31:37.399 [2024-10-07 09:52:36.754685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.754693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.754883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.754890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.755179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.755186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.755528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.755535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.755838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.755845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.756164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.756170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.756369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.756375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.756767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.756775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.757095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.757102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 00:31:37.399 [2024-10-07 09:52:36.757455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.757462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 
00:31:37.399 [2024-10-07 09:52:36.757675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.399 [2024-10-07 09:52:36.757682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.399 qpair failed and we were unable to recover it. 
00:31:37.399-00:31:37.407 [2024-10-07 09:52:36.758018 .. 09:52:36.816013] [... the same three-line sequence repeats for every subsequent reconnect attempt: posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:31:37.407 [2024-10-07 09:52:36.816327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.816335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.816625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.816633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.816825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.816832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.817013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.817020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.817232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.817243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.817454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.817462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.817659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.817666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.817997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.818004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.818158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.818165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.818493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.818500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 
00:31:37.407 [2024-10-07 09:52:36.818871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.818878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.819296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.819304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.819626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.819634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.819930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.819937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.820159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.820166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.820360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.820367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.820696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.820703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.821032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.821039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.821352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.821360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.821647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.821654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 
00:31:37.407 [2024-10-07 09:52:36.821835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.821843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.822178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.822185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.822495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.822502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.822828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.822835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.823012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.407 [2024-10-07 09:52:36.823019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.407 qpair failed and we were unable to recover it. 00:31:37.407 [2024-10-07 09:52:36.823299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.823307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.823628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.823635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.823814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.823821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.824107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.824114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.824311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.824319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 
00:31:37.408 [2024-10-07 09:52:36.824695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.824703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.825031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.825038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.825367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.825374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.825628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.825635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.825838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.825845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.826191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.826197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.826498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.826505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.826836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.826843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.827166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.827174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.827487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.827494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 
00:31:37.408 [2024-10-07 09:52:36.827662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.827669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.827957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.827965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.828281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.828288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.828322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.828328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.828736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.828744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.828906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.828916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.829194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.829203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.829404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.829412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.829733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.829741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.830050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.830057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 
00:31:37.408 [2024-10-07 09:52:36.830239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.830245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.830578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.830585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.830910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.830917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.831103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.831111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.831416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.831423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.408 qpair failed and we were unable to recover it. 00:31:37.408 [2024-10-07 09:52:36.831743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.408 [2024-10-07 09:52:36.831751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.831983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.831990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.832290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.832297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.832589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.832597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.832759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.832767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 
00:31:37.409 [2024-10-07 09:52:36.832956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.832963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.833159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.833166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.833542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.833549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.833861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.833869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.834043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.834051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.834380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.834387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.834564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.834571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.834853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.834861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.835070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.835077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.835312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.835319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 
00:31:37.409 [2024-10-07 09:52:36.835697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.835704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.835879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.835886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.836258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.836266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.836598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.836605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.836693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.836701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.836980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.836987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.837333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.837340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.837659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.837667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.837982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.837989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 00:31:37.409 [2024-10-07 09:52:36.838203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.409 [2024-10-07 09:52:36.838211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.409 qpair failed and we were unable to recover it. 
00:31:37.410 [2024-10-07 09:52:36.838534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.410 [2024-10-07 09:52:36.838542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.410 qpair failed and we were unable to recover it. 00:31:37.410 [2024-10-07 09:52:36.838606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.410 [2024-10-07 09:52:36.838613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.410 qpair failed and we were unable to recover it. 00:31:37.410 [2024-10-07 09:52:36.838715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.410 [2024-10-07 09:52:36.838722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.410 qpair failed and we were unable to recover it. 00:31:37.410 [2024-10-07 09:52:36.838990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.410 [2024-10-07 09:52:36.838998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.410 qpair failed and we were unable to recover it. 00:31:37.410 [2024-10-07 09:52:36.839313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.410 [2024-10-07 09:52:36.839321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.410 qpair failed and we were unable to recover it. 00:31:37.410 [2024-10-07 09:52:36.839665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.410 [2024-10-07 09:52:36.839672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.410 qpair failed and we were unable to recover it. 00:31:37.410 [2024-10-07 09:52:36.839911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.410 [2024-10-07 09:52:36.839918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.410 qpair failed and we were unable to recover it. 00:31:37.410 [2024-10-07 09:52:36.840272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.410 [2024-10-07 09:52:36.840278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.410 qpair failed and we were unable to recover it. 00:31:37.410 [2024-10-07 09:52:36.840451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.410 [2024-10-07 09:52:36.840458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.410 qpair failed and we were unable to recover it. 00:31:37.410 [2024-10-07 09:52:36.840785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.410 [2024-10-07 09:52:36.840793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.410 qpair failed and we were unable to recover it. 
00:31:37.410 [2024-10-07 09:52:36.840953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.410 [2024-10-07 09:52:36.840959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.410 qpair failed and we were unable to recover it.
00:31:37.410 [2024-10-07 09:52:36.841346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.410 [2024-10-07 09:52:36.841441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.410 qpair failed and we were unable to recover it.
00:31:37.410 [2024-10-07 09:52:36.841727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.410 [2024-10-07 09:52:36.841782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.410 qpair failed and we were unable to recover it.
00:31:37.410 [2024-10-07 09:52:36.842138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.410 [2024-10-07 09:52:36.842169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.410 qpair failed and we were unable to recover it.
[... the failure sequence then resumes against tqpair=0x85c550, repeating six more times between 09:52:36.842 and 09:52:36.844 ...]
00:31:37.410 [2024-10-07 09:52:36.844474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.410 [2024-10-07 09:52:36.844482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.410 qpair failed and we were unable to recover it.
[... the same failure sequence against tqpair=0x85c550 repeats roughly 65 more times between 09:52:36.844 and 09:52:36.864 ...]
00:31:37.413 [2024-10-07 09:52:36.864225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.864231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.864504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.864512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.864686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.864694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.864866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.864874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.865061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.865068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.865236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.865243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.865521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.865528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.865802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.865810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.866166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.866174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.866366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.866373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 
00:31:37.413 [2024-10-07 09:52:36.866548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.866554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.866748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.866758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.866956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.866963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.867211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.867218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.867553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.867561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.867888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.867897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.868210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.868218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.868575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.868583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.868873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.868880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.869198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.869206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 
00:31:37.413 [2024-10-07 09:52:36.869512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.869520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.869843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.869851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.870035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.870043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.870129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.870135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.870429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.870437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.870756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.870764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.871089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.871097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.871408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.871415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.871629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.413 [2024-10-07 09:52:36.871636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.413 qpair failed and we were unable to recover it. 00:31:37.413 [2024-10-07 09:52:36.871715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.871721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 
00:31:37.414 [2024-10-07 09:52:36.871992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.872000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.872320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.872327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.872644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.872651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.872689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.872696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.872999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.873006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.873325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.873332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.873508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.873515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.873805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.873812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.873995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.874002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.874199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.874206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 
00:31:37.414 [2024-10-07 09:52:36.874393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.874402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.874581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.874587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.874914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.874922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.875237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.875243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.875567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.875574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.875911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.875919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.876237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.876245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.876572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.876579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.876881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.876888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.877069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.877076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 
00:31:37.414 [2024-10-07 09:52:36.877254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.877260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.877540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.877547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.877872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.877879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.878198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.878206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.878523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.878530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.878850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.878858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.879031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.879040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.879357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.879365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.879723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.414 [2024-10-07 09:52:36.879730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.414 qpair failed and we were unable to recover it. 00:31:37.414 [2024-10-07 09:52:36.880067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.415 [2024-10-07 09:52:36.880074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.415 qpair failed and we were unable to recover it. 
00:31:37.415 [2024-10-07 09:52:36.880236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.415 [2024-10-07 09:52:36.880244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.415 qpair failed and we were unable to recover it. 00:31:37.415 [2024-10-07 09:52:36.880565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.415 [2024-10-07 09:52:36.880572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.415 qpair failed and we were unable to recover it. 00:31:37.415 [2024-10-07 09:52:36.880655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.415 [2024-10-07 09:52:36.880661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.415 qpair failed and we were unable to recover it. 00:31:37.415 [2024-10-07 09:52:36.880945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.415 [2024-10-07 09:52:36.880952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.415 qpair failed and we were unable to recover it. 00:31:37.415 [2024-10-07 09:52:36.881269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.415 [2024-10-07 09:52:36.881276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.415 qpair failed and we were unable to recover it. 00:31:37.415 [2024-10-07 09:52:36.881677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.415 [2024-10-07 09:52:36.881685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.415 qpair failed and we were unable to recover it. 00:31:37.415 [2024-10-07 09:52:36.881911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.415 [2024-10-07 09:52:36.881918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.415 qpair failed and we were unable to recover it. 00:31:37.415 [2024-10-07 09:52:36.882236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.415 [2024-10-07 09:52:36.882243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.415 qpair failed and we were unable to recover it. 00:31:37.415 [2024-10-07 09:52:36.882569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.415 [2024-10-07 09:52:36.882576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.415 qpair failed and we were unable to recover it. 00:31:37.415 [2024-10-07 09:52:36.882751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.415 [2024-10-07 09:52:36.882759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.415 qpair failed and we were unable to recover it. 
00:31:37.415 [2024-10-07 09:52:36.882922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.415 [2024-10-07 09:52:36.882929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.415 qpair failed and we were unable to recover it.
00:31:37.415 [2024-10-07 09:52:36.883321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.415 [2024-10-07 09:52:36.883415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.415 qpair failed and we were unable to recover it.
00:31:37.415 [2024-10-07 09:52:36.883953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.415 [2024-10-07 09:52:36.884051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.415 qpair failed and we were unable to recover it.
00:31:37.415 [2024-10-07 09:52:36.884340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.415 [2024-10-07 09:52:36.884378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.415 qpair failed and we were unable to recover it.
00:31:37.415 [2024-10-07 09:52:36.884569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.415 [2024-10-07 09:52:36.884578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.415 qpair failed and we were unable to recover it.
00:31:37.415 [2024-10-07 09:52:36.884902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.415 [2024-10-07 09:52:36.884909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.415 qpair failed and we were unable to recover it.
00:31:37.415 [2024-10-07 09:52:36.885208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.415 [2024-10-07 09:52:36.885223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.415 qpair failed and we were unable to recover it.
00:31:37.415 [2024-10-07 09:52:36.885533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.415 [2024-10-07 09:52:36.885540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.415 qpair failed and we were unable to recover it.
00:31:37.415 [2024-10-07 09:52:36.885851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.415 [2024-10-07 09:52:36.885859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.415 qpair failed and we were unable to recover it.
00:31:37.415 [2024-10-07 09:52:36.886018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.415 [2024-10-07 09:52:36.886029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.415 qpair failed and we were unable to recover it.
00:31:37.418 [2024-10-07 09:52:36.910766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.418 [2024-10-07 09:52:36.910773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.418 qpair failed and we were unable to recover it.
00:31:37.418 [2024-10-07 09:52:36.911090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.911097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.911265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.911272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.911433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.911440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.911483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.911490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.911798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.911806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.912123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.912130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.912327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.912334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.912694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.912701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.912943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.912950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.913130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.913137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 
00:31:37.419 [2024-10-07 09:52:36.913460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.913467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.913646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.913654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.913822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.913829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.913987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.913994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.914330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.914337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.914509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.914516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.914878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.914885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.915077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.915085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.915382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.915389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.915685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.915692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 
00:31:37.419 [2024-10-07 09:52:36.916042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.916049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.916405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.916412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.916702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.916709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.917032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.917039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.917362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.917369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.917701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.917708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.917994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.918001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.918330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.918337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.918658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.918665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.419 qpair failed and we were unable to recover it. 00:31:37.419 [2024-10-07 09:52:36.918996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.419 [2024-10-07 09:52:36.919003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 
00:31:37.420 [2024-10-07 09:52:36.919170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.919177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.919403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.919410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.919648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.919655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.919843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.919850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.920254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.920262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.920581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.920588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.920747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.920754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.920968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.920975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.921259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.921266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.921441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.921448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 
00:31:37.420 [2024-10-07 09:52:36.921630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.921637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.921849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.921857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.922178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.922185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.922502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.922509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.922787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.922794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.923159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.923168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.923366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.923373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.923634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.923641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.923950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.923957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.924279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.924287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 
00:31:37.420 [2024-10-07 09:52:36.924451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.924464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.924780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.924787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.925093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.925101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.925323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.925330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.925671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.925678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.925999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.926006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.926346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.926354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.926573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.926581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.926872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.926879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 00:31:37.420 [2024-10-07 09:52:36.927226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.927234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.420 qpair failed and we were unable to recover it. 
00:31:37.420 [2024-10-07 09:52:36.927422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.420 [2024-10-07 09:52:36.927429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.927752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.927759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.928092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.928099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.928483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.928489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.928891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.928898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.929034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.929041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.929265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.929272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.929623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.929630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.929931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.929939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.929981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.929988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 
00:31:37.421 [2024-10-07 09:52:36.930312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.930319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.930635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.930642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.930975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.930982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.931320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.931327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.931648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.931656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.931827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.931834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.932173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.932181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.932494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.932502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.932841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.932849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.933271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.933280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 
00:31:37.421 [2024-10-07 09:52:36.933590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.933597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.933786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.933794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.934167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.934174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.934582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.934589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.421 [2024-10-07 09:52:36.934907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.421 [2024-10-07 09:52:36.934915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.421 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.935118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.935126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.935454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.935464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.935640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.935648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.935839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.935846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.936207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.936215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 
00:31:37.422 [2024-10-07 09:52:36.936416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.936424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.936851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.936858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.937039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.937045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.937329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.937335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.937757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.937765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.938092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.938099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.938140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.938146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.938434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.938441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.938475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.938482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.938702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.938709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 
00:31:37.422 [2024-10-07 09:52:36.939021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.939028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.939375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.939383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.939721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.939729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.940053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.940060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.940256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.940262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.940670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.940678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.941023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.941030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.941316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.941323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.941657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.941665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.941969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.942014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 
00:31:37.422 [2024-10-07 09:52:36.942333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.942340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.942676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.942685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.943010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.943018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.943338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.943347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.422 [2024-10-07 09:52:36.943391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.422 [2024-10-07 09:52:36.943398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.422 qpair failed and we were unable to recover it. 00:31:37.423 [2024-10-07 09:52:36.943697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.423 [2024-10-07 09:52:36.943704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.423 qpair failed and we were unable to recover it. 00:31:37.423 [2024-10-07 09:52:36.944131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.423 [2024-10-07 09:52:36.944138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.423 qpair failed and we were unable to recover it. 00:31:37.423 [2024-10-07 09:52:36.944480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.423 [2024-10-07 09:52:36.944487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.423 qpair failed and we were unable to recover it. 00:31:37.423 [2024-10-07 09:52:36.944706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.423 [2024-10-07 09:52:36.944714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.423 qpair failed and we were unable to recover it. 00:31:37.423 [2024-10-07 09:52:36.944817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.423 [2024-10-07 09:52:36.944824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.423 qpair failed and we were unable to recover it. 
00:31:37.423 [2024-10-07 09:52:36.944980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.423 [2024-10-07 09:52:36.944987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.423 qpair failed and we were unable to recover it. 00:31:37.423 [2024-10-07 09:52:36.945320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.423 [2024-10-07 09:52:36.945327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.423 qpair failed and we were unable to recover it. 00:31:37.423 [2024-10-07 09:52:36.945645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.423 [2024-10-07 09:52:36.945653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.423 qpair failed and we were unable to recover it. 00:31:37.423 [2024-10-07 09:52:36.945838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.423 [2024-10-07 09:52:36.945845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.423 qpair failed and we were unable to recover it. 00:31:37.423 [2024-10-07 09:52:36.946131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.423 [2024-10-07 09:52:36.946138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.423 qpair failed and we were unable to recover it. 00:31:37.423 [2024-10-07 09:52:36.946315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.423 [2024-10-07 09:52:36.946322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.423 qpair failed and we were unable to recover it. 00:31:37.423 [2024-10-07 09:52:36.946645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.423 [2024-10-07 09:52:36.946652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.423 qpair failed and we were unable to recover it. 00:31:37.423 [2024-10-07 09:52:36.947020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.423 [2024-10-07 09:52:36.947027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.423 qpair failed and we were unable to recover it. 00:31:37.423 [2024-10-07 09:52:36.947345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.423 [2024-10-07 09:52:36.947351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.423 qpair failed and we were unable to recover it. 00:31:37.423 [2024-10-07 09:52:36.947672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.423 [2024-10-07 09:52:36.947679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.423 qpair failed and we were unable to recover it. 
00:31:37.423 [2024-10-07 09:52:36.947988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.423 [2024-10-07 09:52:36.947995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.423 qpair failed and we were unable to recover it.
00:31:37.429 [... the connect()/qpair-failure triplet above repeats verbatim, with only the timestamps advancing, from 09:52:36.948325 through 09:52:36.993298, every attempt against tqpair=0x85c550, addr=10.0.0.2, port=4420 ...]
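errno = 111 in these retries is ECONNREFUSED: the initiator's connect() reaches 10.0.0.2, but nothing is listening on port 4420 yet because the nvmf target is still starting. A minimal shell sketch of the same condition (an illustration, not part of the harness; it assumes only bash and its /dev/tcp pseudo-device, with the address and port taken from the log):

    # Probe the NVMe/TCP listener the way the initiator's connect() does.
    # With no listener bound to 10.0.0.2:4420 the open fails with ECONNREFUSED (errno 111).
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "listener is up on 10.0.0.2:4420"
    else
      echo "connect refused - target not listening on 4420 yet"
    fi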
00:31:37.692 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # (( i == 0 ))
00:31:37.692 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@867 -- # return 0
00:31:37.692 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:31:37.692 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@733 -- # xtrace_disable
00:31:37.692 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:37.692 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:37.692 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:37.692 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@564 -- # xtrace_disable
00:31:37.692 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:37.692 Malloc0
00:31:37.692 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:31:37.692 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:31:37.692 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@564 -- # xtrace_disable
00:31:37.692 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:37.692 [2024-10-07 09:52:37.181108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.692 [2024-10-07 09:52:37.181198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.692 qpair failed and we were unable to recover it.
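rpc_cmd in the trace above is the harness wrapper around SPDK's scripts/rpc.py, so the target-side setup the test performs can be read directly from the two calls: create a 64 MiB malloc bdev with 512-byte blocks, then initialize the TCP transport. A sketch of the equivalent direct invocations, plus the usual follow-up that actually opens port 4420 (the subsystem NQN and serial number below are illustrative assumptions, not values from this log):

    # The two calls visible in the trace above.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512 B blocks, named Malloc0
    scripts/rpc.py nvmf_create_transport -t tcp -o        # produces "*** TCP Transport Init ***" below
    # Illustrative follow-up (NQN and serial are assumptions): export Malloc0 and bind the
    # listener on 10.0.0.2:4420 so the initiator's connect() stops failing with errno 111.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420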
00:31:37.692 [2024-10-07 09:52:37.181513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.692 [2024-10-07 09:52:37.181549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.692 qpair failed and we were unable to recover it. 00:31:37.692 [2024-10-07 09:52:37.181924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.692 [2024-10-07 09:52:37.181956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.692 qpair failed and we were unable to recover it. 00:31:37.692 [2024-10-07 09:52:37.182204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.692 [2024-10-07 09:52:37.182240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.692 qpair failed and we were unable to recover it. 00:31:37.692 [2024-10-07 09:52:37.182584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.692 [2024-10-07 09:52:37.182613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.692 qpair failed and we were unable to recover it. 00:31:37.692 [2024-10-07 09:52:37.183017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.692 [2024-10-07 09:52:37.183107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.692 qpair failed and we were unable to recover it. 00:31:37.692 [2024-10-07 09:52:37.183527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.692 [2024-10-07 09:52:37.183563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.692 qpair failed and we were unable to recover it. 00:31:37.692 [2024-10-07 09:52:37.184012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.692 [2024-10-07 09:52:37.184104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.692 qpair failed and we were unable to recover it. 00:31:37.692 [2024-10-07 09:52:37.184408] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.693 [2024-10-07 09:52:37.184516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.693 [2024-10-07 09:52:37.184563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.693 qpair failed and we were unable to recover it. 00:31:37.693 [2024-10-07 09:52:37.184936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.693 [2024-10-07 09:52:37.184968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.693 qpair failed and we were unable to recover it. 00:31:37.693 [2024-10-07 09:52:37.185422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.693 [2024-10-07 09:52:37.185452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420 00:31:37.693 qpair failed and we were unable to recover it. 
00:31:37.693 [2024-10-07 09:52:37.185678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.185709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.185937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.185966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.186366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.186396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.186655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.186684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.186897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.186927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.187274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.187303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.187648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.187679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.188097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.188126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.188224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.188251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c550 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.188641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.188734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.189166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.189202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.189581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.189612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.189992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.190023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.190390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.190419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.190911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.190999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.191296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.191333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.191586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.191631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.192055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.192085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.192440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.192470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.192859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.192890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
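Every error triplet in this stretch is one failed host-side dial: connect() returns errno 111 (ECONNREFUSED on Linux) because nothing is listening on 10.0.0.2:4420 yet, and the initiator keeps redialing until the listener is added further below. Purely as an illustration (no such loop exists in the harness, and nc is an assumed tool here), the same wait could be written as:

# Illustration only: poll until the target's listener accepts TCP connections.
# Each refused zero-I/O probe corresponds to one ECONNREFUSED in the log above.
until nc -z -w 1 10.0.0.2 4420; do
    sleep 0.1
done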
00:31:37.693 [2024-10-07 09:52:37.193125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.193154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 [2024-10-07 09:52:37.193375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.193404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:31:37.693 [2024-10-07 09:52:37.193766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.193796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:37.693 [2024-10-07 09:52:37.194155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@564 -- # xtrace_disable
00:31:37.693 [2024-10-07 09:52:37.194184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.693 qpair failed and we were unable to recover it.
00:31:37.693 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:37.693 [2024-10-07 09:52:37.194533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.693 [2024-10-07 09:52:37.194562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.194969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.194999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.195232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.195265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.195518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.195547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
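Buried in the retry noise, host/target_disconnect.sh@22 creates the subsystem the host will later connect to. A standalone sketch of the same call, with the SPDK path assumed as in the earlier sketch:

# Sketch: create subsystem nqn.2016-06.io.spdk:cnode1 with serial number
# SPDK00000000000001; '-a' allows any host NQN to connect to it.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR"/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001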
00:31:37.694 [2024-10-07 09:52:37.195881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.195912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.196242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.196271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.196491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.196519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.196959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.196990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.197345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.197374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.197627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.197662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.198022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.198052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.198407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.198436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.198696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.198728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.198959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.198988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.199217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.199246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.199579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.199607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.199979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.200009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.200265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.200294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.200714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.200745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.200927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.200955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.201266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.201295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.201653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.201684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.201876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.201904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.202248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.202277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.202605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.202642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.202911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.202942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.203291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.203320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.203569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.203604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.203939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.203969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.694 qpair failed and we were unable to recover it.
00:31:37.694 [2024-10-07 09:52:37.204328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.694 [2024-10-07 09:52:37.204357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.204557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.204585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.204967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.204997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.205318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.205346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:31:37.695 [2024-10-07 09:52:37.205606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.205641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:37.695 [2024-10-07 09:52:37.205964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.205993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@564 -- # xtrace_disable
00:31:37.695 [2024-10-07 09:52:37.206235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.206271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:37.695 [2024-10-07 09:52:37.206609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.206654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.207084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.207113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.207369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.207397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.207786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.207818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.208035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.208063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
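host/target_disconnect.sh@24, traced above, attaches the Malloc0 bdev to that subsystem as a namespace. Standalone sketch, same assumptions as before:

# Sketch: expose bdev Malloc0 as a namespace of cnode1; with no explicit
# nsid argument the target assigns the first free ID.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0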
00:31:37.695 [2024-10-07 09:52:37.208304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.208333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.208588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.208631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.208883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.208912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.209269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.209297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.209647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.209677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.209889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.209918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.210158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.210186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.210525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.210554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.210958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.210989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.211123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.211156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.211545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.211575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.212000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.212031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.212258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.212287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.212585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.212614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.212959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.695 [2024-10-07 09:52:37.212988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.695 qpair failed and we were unable to recover it.
00:31:37.695 [2024-10-07 09:52:37.213230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.213258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.213627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.213657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.213747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.213775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.213976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.214003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.214342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.214371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.214734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.214765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.215113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.215143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.215354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.215383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.215661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.215691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.216074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.216103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.216336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.216364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.216583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.216612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.217099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.217129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.217339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.217368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.217481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.217511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd64000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:31:37.696 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:37.696 [2024-10-07 09:52:37.218088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.218180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@564 -- # xtrace_disable
00:31:37.696 [2024-10-07 09:52:37.218445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.218483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:37.696 [2024-10-07 09:52:37.219047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.219140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.219462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.219499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.219940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.219974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.220310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.220339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.220568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.220597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.220970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.221001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.221236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.221264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.221630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.221661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.222007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.222036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.222370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.222398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.696 qpair failed and we were unable to recover it.
00:31:37.696 [2024-10-07 09:52:37.222754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.696 [2024-10-07 09:52:37.222785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.697 qpair failed and we were unable to recover it.
00:31:37.697 [2024-10-07 09:52:37.223024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.697 [2024-10-07 09:52:37.223052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.697 qpair failed and we were unable to recover it.
00:31:37.697 [2024-10-07 09:52:37.223436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.697 [2024-10-07 09:52:37.223465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.697 qpair failed and we were unable to recover it.
00:31:37.697 [2024-10-07 09:52:37.223833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.697 [2024-10-07 09:52:37.223864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.697 qpair failed and we were unable to recover it.
00:31:37.697 [2024-10-07 09:52:37.224104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.697 [2024-10-07 09:52:37.224132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.697 qpair failed and we were unable to recover it.
00:31:37.697 [2024-10-07 09:52:37.224500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:37.697 [2024-10-07 09:52:37.224529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd70000b90 with addr=10.0.0.2, port=4420
00:31:37.697 qpair failed and we were unable to recover it.
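host/target_disconnect.sh@25, traced a block earlier, is the step that finally opens 10.0.0.2:4420; the target acknowledges with the "Target Listening" notice just below. Standalone sketch, same assumptions as before:

# Sketch: make cnode1 reachable over NVMe/TCP at 10.0.0.2:4420. Until this
# call completes, every host connect() above ends in ECONNREFUSED.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420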
00:31:37.697 [2024-10-07 09:52:37.224687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:37.697 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:31:37.697 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:37.697 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@564 -- # xtrace_disable
00:31:37.697 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:37.697 [2024-10-07 09:52:37.235366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.697 [2024-10-07 09:52:37.235505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.697 [2024-10-07 09:52:37.235552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.697 [2024-10-07 09:52:37.235575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.697 [2024-10-07 09:52:37.235597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.697 [2024-10-07 09:52:37.235658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.697 qpair failed and we were unable to recover it.
00:31:37.697 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:31:37.697 09:52:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3552939
00:31:37.697 [2024-10-07 09:52:37.245235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.697 [2024-10-07 09:52:37.245352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.697 [2024-10-07 09:52:37.245382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.697 [2024-10-07 09:52:37.245397] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.697 [2024-10-07 09:52:37.245411] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.697 [2024-10-07 09:52:37.245440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.697 qpair failed and we were unable to recover it.
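With the listener up, the failure mode changes: TCP now connects, but the Fabrics CONNECT for I/O qpair 1 is rejected. The target side logs "Unknown controller ID 0x1", i.e. the controller ID the host presents no longer exists there, which is the disconnect scenario this test exercises; "sct 1, sc 130" decodes per the NVMe-oF spec as status type 1 (command specific), code 0x82, CONNECT Invalid Parameters, and the host surfaces it as CQ transport error -6 (ENXIO) before giving up on the qpair. For reference only, an equivalent host-side connect attempt using nvme-cli (the harness drives SPDK's own initiator, not nvme-cli):

# Reference only: the same CONNECT expressed with nvme-cli.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1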
00:31:37.697 [2024-10-07 09:52:37.255316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.697 [2024-10-07 09:52:37.255382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.697 [2024-10-07 09:52:37.255401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.697 [2024-10-07 09:52:37.255412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.697 [2024-10-07 09:52:37.255421] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.697 [2024-10-07 09:52:37.255441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.697 qpair failed and we were unable to recover it.
00:31:37.697 [2024-10-07 09:52:37.265325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.697 [2024-10-07 09:52:37.265416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.697 [2024-10-07 09:52:37.265430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.697 [2024-10-07 09:52:37.265437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.697 [2024-10-07 09:52:37.265443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.697 [2024-10-07 09:52:37.265458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.697 qpair failed and we were unable to recover it.
00:31:37.697 [2024-10-07 09:52:37.275306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.697 [2024-10-07 09:52:37.275361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.697 [2024-10-07 09:52:37.275374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.697 [2024-10-07 09:52:37.275381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.697 [2024-10-07 09:52:37.275388] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.697 [2024-10-07 09:52:37.275402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.697 qpair failed and we were unable to recover it.
00:31:37.697 [2024-10-07 09:52:37.285242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.697 [2024-10-07 09:52:37.285292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.697 [2024-10-07 09:52:37.285306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.697 [2024-10-07 09:52:37.285314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.697 [2024-10-07 09:52:37.285320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.697 [2024-10-07 09:52:37.285334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.697 qpair failed and we were unable to recover it.
00:31:37.697 [2024-10-07 09:52:37.295223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.697 [2024-10-07 09:52:37.295322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.697 [2024-10-07 09:52:37.295335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.697 [2024-10-07 09:52:37.295342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.697 [2024-10-07 09:52:37.295349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.697 [2024-10-07 09:52:37.295363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.697 qpair failed and we were unable to recover it.
00:31:37.698 [2024-10-07 09:52:37.305364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.698 [2024-10-07 09:52:37.305428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.698 [2024-10-07 09:52:37.305441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.698 [2024-10-07 09:52:37.305452] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.698 [2024-10-07 09:52:37.305458] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.698 [2024-10-07 09:52:37.305473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.698 qpair failed and we were unable to recover it.
00:31:37.698 [2024-10-07 09:52:37.315396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.698 [2024-10-07 09:52:37.315449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.698 [2024-10-07 09:52:37.315464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.698 [2024-10-07 09:52:37.315471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.698 [2024-10-07 09:52:37.315477] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.698 [2024-10-07 09:52:37.315492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.698 qpair failed and we were unable to recover it.
00:31:37.698 [2024-10-07 09:52:37.325379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.698 [2024-10-07 09:52:37.325425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.698 [2024-10-07 09:52:37.325438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.698 [2024-10-07 09:52:37.325445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.698 [2024-10-07 09:52:37.325451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.698 [2024-10-07 09:52:37.325466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.698 qpair failed and we were unable to recover it.
00:31:37.698 [2024-10-07 09:52:37.335454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.698 [2024-10-07 09:52:37.335502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.698 [2024-10-07 09:52:37.335516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.698 [2024-10-07 09:52:37.335523] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.698 [2024-10-07 09:52:37.335529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.698 [2024-10-07 09:52:37.335543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.698 qpair failed and we were unable to recover it.
00:31:37.698 [2024-10-07 09:52:37.345470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.698 [2024-10-07 09:52:37.345527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.698 [2024-10-07 09:52:37.345541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.698 [2024-10-07 09:52:37.345548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.698 [2024-10-07 09:52:37.345555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.698 [2024-10-07 09:52:37.345569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.698 qpair failed and we were unable to recover it.
00:31:37.960 [2024-10-07 09:52:37.355496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.960 [2024-10-07 09:52:37.355552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.960 [2024-10-07 09:52:37.355566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.960 [2024-10-07 09:52:37.355573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.960 [2024-10-07 09:52:37.355580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.960 [2024-10-07 09:52:37.355594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.960 qpair failed and we were unable to recover it.
00:31:37.960 [2024-10-07 09:52:37.365461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.960 [2024-10-07 09:52:37.365542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.960 [2024-10-07 09:52:37.365555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.960 [2024-10-07 09:52:37.365562] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.960 [2024-10-07 09:52:37.365569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.960 [2024-10-07 09:52:37.365583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.960 qpair failed and we were unable to recover it.
00:31:37.960 [2024-10-07 09:52:37.375534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.960 [2024-10-07 09:52:37.375585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.960 [2024-10-07 09:52:37.375599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.960 [2024-10-07 09:52:37.375606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.960 [2024-10-07 09:52:37.375612] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.960 [2024-10-07 09:52:37.375631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.960 qpair failed and we were unable to recover it.
00:31:37.960 [2024-10-07 09:52:37.385565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.960 [2024-10-07 09:52:37.385626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.960 [2024-10-07 09:52:37.385640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.960 [2024-10-07 09:52:37.385648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.960 [2024-10-07 09:52:37.385654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.960 [2024-10-07 09:52:37.385669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.960 qpair failed and we were unable to recover it.
00:31:37.960 [2024-10-07 09:52:37.395522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.960 [2024-10-07 09:52:37.395587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.960 [2024-10-07 09:52:37.395601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.960 [2024-10-07 09:52:37.395611] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.960 [2024-10-07 09:52:37.395622] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.960 [2024-10-07 09:52:37.395637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.960 qpair failed and we were unable to recover it.
00:31:37.960 [2024-10-07 09:52:37.405439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.960 [2024-10-07 09:52:37.405488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.960 [2024-10-07 09:52:37.405502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.960 [2024-10-07 09:52:37.405509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.960 [2024-10-07 09:52:37.405515] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.960 [2024-10-07 09:52:37.405529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.960 qpair failed and we were unable to recover it.
00:31:37.960 [2024-10-07 09:52:37.415637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.960 [2024-10-07 09:52:37.415732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.960 [2024-10-07 09:52:37.415747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.960 [2024-10-07 09:52:37.415754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.960 [2024-10-07 09:52:37.415760] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.960 [2024-10-07 09:52:37.415775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.960 qpair failed and we were unable to recover it.
00:31:37.960 [2024-10-07 09:52:37.425547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.960 [2024-10-07 09:52:37.425606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.960 [2024-10-07 09:52:37.425624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.960 [2024-10-07 09:52:37.425631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.960 [2024-10-07 09:52:37.425638] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.960 [2024-10-07 09:52:37.425652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.960 qpair failed and we were unable to recover it.
00:31:37.960 [2024-10-07 09:52:37.435712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.960 [2024-10-07 09:52:37.435803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.960 [2024-10-07 09:52:37.435816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.960 [2024-10-07 09:52:37.435823] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.960 [2024-10-07 09:52:37.435829] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.961 [2024-10-07 09:52:37.435844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.961 qpair failed and we were unable to recover it.
00:31:37.961 [2024-10-07 09:52:37.445677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.961 [2024-10-07 09:52:37.445731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.961 [2024-10-07 09:52:37.445745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.961 [2024-10-07 09:52:37.445751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.961 [2024-10-07 09:52:37.445758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.961 [2024-10-07 09:52:37.445772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.961 qpair failed and we were unable to recover it.
00:31:37.961 [2024-10-07 09:52:37.455717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.961 [2024-10-07 09:52:37.455764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.961 [2024-10-07 09:52:37.455777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.961 [2024-10-07 09:52:37.455785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.961 [2024-10-07 09:52:37.455792] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:37.961 [2024-10-07 09:52:37.455806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:37.961 qpair failed and we were unable to recover it.
00:31:37.961 [2024-10-07 09:52:37.465779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.961 [2024-10-07 09:52:37.465835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.961 [2024-10-07 09:52:37.465848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.961 [2024-10-07 09:52:37.465855] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.961 [2024-10-07 09:52:37.465862] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:37.961 [2024-10-07 09:52:37.465876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-10-07 09:52:37.475781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.961 [2024-10-07 09:52:37.475838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.961 [2024-10-07 09:52:37.475851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.961 [2024-10-07 09:52:37.475858] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.961 [2024-10-07 09:52:37.475864] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:37.961 [2024-10-07 09:52:37.475878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-10-07 09:52:37.485848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.961 [2024-10-07 09:52:37.485921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.961 [2024-10-07 09:52:37.485938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.961 [2024-10-07 09:52:37.485945] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.961 [2024-10-07 09:52:37.485951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:37.961 [2024-10-07 09:52:37.485965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:37.961 qpair failed and we were unable to recover it. 
00:31:37.961 [2024-10-07 09:52:37.495744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.961 [2024-10-07 09:52:37.495795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.961 [2024-10-07 09:52:37.495809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.961 [2024-10-07 09:52:37.495816] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.961 [2024-10-07 09:52:37.495822] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:37.961 [2024-10-07 09:52:37.495836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-10-07 09:52:37.505793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.961 [2024-10-07 09:52:37.505857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.961 [2024-10-07 09:52:37.505871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.961 [2024-10-07 09:52:37.505878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.961 [2024-10-07 09:52:37.505885] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:37.961 [2024-10-07 09:52:37.505899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-10-07 09:52:37.516041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.961 [2024-10-07 09:52:37.516105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.961 [2024-10-07 09:52:37.516118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.961 [2024-10-07 09:52:37.516126] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.961 [2024-10-07 09:52:37.516132] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:37.961 [2024-10-07 09:52:37.516146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:37.961 qpair failed and we were unable to recover it. 
00:31:37.961 [2024-10-07 09:52:37.525944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.961 [2024-10-07 09:52:37.525996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.961 [2024-10-07 09:52:37.526009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.961 [2024-10-07 09:52:37.526016] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.961 [2024-10-07 09:52:37.526023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:37.961 [2024-10-07 09:52:37.526040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-10-07 09:52:37.535884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.961 [2024-10-07 09:52:37.535939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.961 [2024-10-07 09:52:37.535953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.961 [2024-10-07 09:52:37.535960] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.961 [2024-10-07 09:52:37.535966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:37.961 [2024-10-07 09:52:37.535980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-10-07 09:52:37.546064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.961 [2024-10-07 09:52:37.546123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.961 [2024-10-07 09:52:37.546136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.961 [2024-10-07 09:52:37.546143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.961 [2024-10-07 09:52:37.546149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:37.961 [2024-10-07 09:52:37.546163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:37.961 qpair failed and we were unable to recover it. 
00:31:37.961 [2024-10-07 09:52:37.556055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.961 [2024-10-07 09:52:37.556113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.961 [2024-10-07 09:52:37.556126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.961 [2024-10-07 09:52:37.556134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.961 [2024-10-07 09:52:37.556140] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:37.961 [2024-10-07 09:52:37.556154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-10-07 09:52:37.565986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.961 [2024-10-07 09:52:37.566031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.961 [2024-10-07 09:52:37.566044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.961 [2024-10-07 09:52:37.566051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.961 [2024-10-07 09:52:37.566057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:37.961 [2024-10-07 09:52:37.566071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:37.961 qpair failed and we were unable to recover it. 00:31:37.961 [2024-10-07 09:52:37.576085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.961 [2024-10-07 09:52:37.576140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.962 [2024-10-07 09:52:37.576156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.962 [2024-10-07 09:52:37.576163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.962 [2024-10-07 09:52:37.576170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:37.962 [2024-10-07 09:52:37.576184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:37.962 qpair failed and we were unable to recover it. 
00:31:37.962 [2024-10-07 09:52:37.586122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.962 [2024-10-07 09:52:37.586176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.962 [2024-10-07 09:52:37.586189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.962 [2024-10-07 09:52:37.586197] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.962 [2024-10-07 09:52:37.586203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:37.962 [2024-10-07 09:52:37.586217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-10-07 09:52:37.596165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.962 [2024-10-07 09:52:37.596219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.962 [2024-10-07 09:52:37.596233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.962 [2024-10-07 09:52:37.596239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.962 [2024-10-07 09:52:37.596246] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:37.962 [2024-10-07 09:52:37.596259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:37.962 qpair failed and we were unable to recover it. 00:31:37.962 [2024-10-07 09:52:37.606133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.962 [2024-10-07 09:52:37.606222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.962 [2024-10-07 09:52:37.606235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.962 [2024-10-07 09:52:37.606243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.962 [2024-10-07 09:52:37.606249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:37.962 [2024-10-07 09:52:37.606263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:37.962 qpair failed and we were unable to recover it. 
00:31:37.962 [2024-10-07 09:52:37.616201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.962 [2024-10-07 09:52:37.616253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.962 [2024-10-07 09:52:37.616267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.962 [2024-10-07 09:52:37.616275] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.962 [2024-10-07 09:52:37.616281] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:37.962 [2024-10-07 09:52:37.616299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:37.962 qpair failed and we were unable to recover it. 00:31:38.225 [2024-10-07 09:52:37.626236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.225 [2024-10-07 09:52:37.626332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.225 [2024-10-07 09:52:37.626346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.225 [2024-10-07 09:52:37.626353] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.225 [2024-10-07 09:52:37.626360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.225 [2024-10-07 09:52:37.626374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.225 qpair failed and we were unable to recover it. 00:31:38.225 [2024-10-07 09:52:37.636273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.225 [2024-10-07 09:52:37.636327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.225 [2024-10-07 09:52:37.636342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.225 [2024-10-07 09:52:37.636349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.225 [2024-10-07 09:52:37.636356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.225 [2024-10-07 09:52:37.636370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.225 qpair failed and we were unable to recover it. 
00:31:38.225 [2024-10-07 09:52:37.646123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.225 [2024-10-07 09:52:37.646169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.225 [2024-10-07 09:52:37.646183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.225 [2024-10-07 09:52:37.646190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.225 [2024-10-07 09:52:37.646197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.225 [2024-10-07 09:52:37.646211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.225 qpair failed and we were unable to recover it. 00:31:38.225 [2024-10-07 09:52:37.656321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.225 [2024-10-07 09:52:37.656371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.225 [2024-10-07 09:52:37.656385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.225 [2024-10-07 09:52:37.656392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.225 [2024-10-07 09:52:37.656398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.225 [2024-10-07 09:52:37.656412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.225 qpair failed and we were unable to recover it. 00:31:38.225 [2024-10-07 09:52:37.666343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.225 [2024-10-07 09:52:37.666405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.225 [2024-10-07 09:52:37.666430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.225 [2024-10-07 09:52:37.666440] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.225 [2024-10-07 09:52:37.666447] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.225 [2024-10-07 09:52:37.666465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.225 qpair failed and we were unable to recover it. 
00:31:38.225 [2024-10-07 09:52:37.676388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.225 [2024-10-07 09:52:37.676456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.225 [2024-10-07 09:52:37.676480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.225 [2024-10-07 09:52:37.676489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.225 [2024-10-07 09:52:37.676496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.225 [2024-10-07 09:52:37.676515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.225 qpair failed and we were unable to recover it. 00:31:38.225 [2024-10-07 09:52:37.686354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.225 [2024-10-07 09:52:37.686401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.225 [2024-10-07 09:52:37.686418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.225 [2024-10-07 09:52:37.686425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.225 [2024-10-07 09:52:37.686432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.225 [2024-10-07 09:52:37.686447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.225 qpair failed and we were unable to recover it. 00:31:38.225 [2024-10-07 09:52:37.696407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.225 [2024-10-07 09:52:37.696461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.225 [2024-10-07 09:52:37.696476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.225 [2024-10-07 09:52:37.696483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.225 [2024-10-07 09:52:37.696489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.225 [2024-10-07 09:52:37.696503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.225 qpair failed and we were unable to recover it. 
00:31:38.225 [2024-10-07 09:52:37.706330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.225 [2024-10-07 09:52:37.706390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.225 [2024-10-07 09:52:37.706404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.225 [2024-10-07 09:52:37.706411] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.225 [2024-10-07 09:52:37.706426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.225 [2024-10-07 09:52:37.706440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.225 qpair failed and we were unable to recover it. 00:31:38.225 [2024-10-07 09:52:37.716438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.225 [2024-10-07 09:52:37.716496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.225 [2024-10-07 09:52:37.716512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.225 [2024-10-07 09:52:37.716520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.225 [2024-10-07 09:52:37.716526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.225 [2024-10-07 09:52:37.716541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.225 qpair failed and we were unable to recover it. 00:31:38.225 [2024-10-07 09:52:37.726449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.225 [2024-10-07 09:52:37.726531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.225 [2024-10-07 09:52:37.726544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.225 [2024-10-07 09:52:37.726551] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.225 [2024-10-07 09:52:37.726558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.225 [2024-10-07 09:52:37.726572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.225 qpair failed and we were unable to recover it. 
00:31:38.225 [2024-10-07 09:52:37.736488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.225 [2024-10-07 09:52:37.736540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.225 [2024-10-07 09:52:37.736554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.225 [2024-10-07 09:52:37.736561] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.225 [2024-10-07 09:52:37.736567] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.225 [2024-10-07 09:52:37.736581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.225 qpair failed and we were unable to recover it. 00:31:38.225 [2024-10-07 09:52:37.746569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.225 [2024-10-07 09:52:37.746630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.225 [2024-10-07 09:52:37.746644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.225 [2024-10-07 09:52:37.746651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.225 [2024-10-07 09:52:37.746658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.225 [2024-10-07 09:52:37.746672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.225 qpair failed and we were unable to recover it. 00:31:38.226 [2024-10-07 09:52:37.756584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.226 [2024-10-07 09:52:37.756654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.226 [2024-10-07 09:52:37.756668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.226 [2024-10-07 09:52:37.756675] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.226 [2024-10-07 09:52:37.756682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.226 [2024-10-07 09:52:37.756696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.226 qpair failed and we were unable to recover it. 
00:31:38.226 [2024-10-07 09:52:37.766583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.226 [2024-10-07 09:52:37.766631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.226 [2024-10-07 09:52:37.766644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.226 [2024-10-07 09:52:37.766651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.226 [2024-10-07 09:52:37.766658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.226 [2024-10-07 09:52:37.766672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.226 qpair failed and we were unable to recover it. 00:31:38.226 [2024-10-07 09:52:37.776646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.226 [2024-10-07 09:52:37.776695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.226 [2024-10-07 09:52:37.776708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.226 [2024-10-07 09:52:37.776715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.226 [2024-10-07 09:52:37.776722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.226 [2024-10-07 09:52:37.776736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.226 qpair failed and we were unable to recover it. 00:31:38.226 [2024-10-07 09:52:37.786682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.226 [2024-10-07 09:52:37.786742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.226 [2024-10-07 09:52:37.786756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.226 [2024-10-07 09:52:37.786763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.226 [2024-10-07 09:52:37.786770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.226 [2024-10-07 09:52:37.786784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.226 qpair failed and we were unable to recover it. 
00:31:38.226 [2024-10-07 09:52:37.796763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.226 [2024-10-07 09:52:37.796823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.226 [2024-10-07 09:52:37.796837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.226 [2024-10-07 09:52:37.796848] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.226 [2024-10-07 09:52:37.796855] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.226 [2024-10-07 09:52:37.796869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.226 qpair failed and we were unable to recover it. 00:31:38.226 [2024-10-07 09:52:37.806684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.226 [2024-10-07 09:52:37.806735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.226 [2024-10-07 09:52:37.806748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.226 [2024-10-07 09:52:37.806756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.226 [2024-10-07 09:52:37.806762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.226 [2024-10-07 09:52:37.806776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.226 qpair failed and we were unable to recover it. 00:31:38.226 [2024-10-07 09:52:37.816740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.226 [2024-10-07 09:52:37.816794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.226 [2024-10-07 09:52:37.816808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.226 [2024-10-07 09:52:37.816816] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.226 [2024-10-07 09:52:37.816822] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.226 [2024-10-07 09:52:37.816837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.226 qpair failed and we were unable to recover it. 
00:31:38.226 [2024-10-07 09:52:37.826794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.226 [2024-10-07 09:52:37.826847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.226 [2024-10-07 09:52:37.826860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.226 [2024-10-07 09:52:37.826867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.226 [2024-10-07 09:52:37.826874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.226 [2024-10-07 09:52:37.826888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.226 qpair failed and we were unable to recover it. 00:31:38.226 [2024-10-07 09:52:37.836711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.226 [2024-10-07 09:52:37.836806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.226 [2024-10-07 09:52:37.836820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.226 [2024-10-07 09:52:37.836827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.226 [2024-10-07 09:52:37.836833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.226 [2024-10-07 09:52:37.836848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.226 qpair failed and we were unable to recover it. 00:31:38.226 [2024-10-07 09:52:37.846802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.226 [2024-10-07 09:52:37.846891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.226 [2024-10-07 09:52:37.846904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.226 [2024-10-07 09:52:37.846911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.226 [2024-10-07 09:52:37.846918] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.226 [2024-10-07 09:52:37.846931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.226 qpair failed and we were unable to recover it. 
00:31:38.226 [2024-10-07 09:52:37.856875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.226 [2024-10-07 09:52:37.856925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.226 [2024-10-07 09:52:37.856939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.226 [2024-10-07 09:52:37.856946] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.226 [2024-10-07 09:52:37.856952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.226 [2024-10-07 09:52:37.856966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.226 qpair failed and we were unable to recover it. 00:31:38.226 [2024-10-07 09:52:37.866788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.226 [2024-10-07 09:52:37.866851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.226 [2024-10-07 09:52:37.866866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.226 [2024-10-07 09:52:37.866873] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.226 [2024-10-07 09:52:37.866879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.226 [2024-10-07 09:52:37.866894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.226 qpair failed and we were unable to recover it. 00:31:38.226 [2024-10-07 09:52:37.876908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.226 [2024-10-07 09:52:37.876967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.226 [2024-10-07 09:52:37.876981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.226 [2024-10-07 09:52:37.876988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.226 [2024-10-07 09:52:37.876995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.226 [2024-10-07 09:52:37.877009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.226 qpair failed and we were unable to recover it. 
00:31:38.490 [2024-10-07 09:52:37.886911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.490 [2024-10-07 09:52:37.886957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.490 [2024-10-07 09:52:37.886970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.490 [2024-10-07 09:52:37.886981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.490 [2024-10-07 09:52:37.886987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.490 [2024-10-07 09:52:37.887001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.490 qpair failed and we were unable to recover it. 00:31:38.490 [2024-10-07 09:52:37.896972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.490 [2024-10-07 09:52:37.897025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.490 [2024-10-07 09:52:37.897039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.490 [2024-10-07 09:52:37.897046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.490 [2024-10-07 09:52:37.897052] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.490 [2024-10-07 09:52:37.897066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.490 qpair failed and we were unable to recover it. 00:31:38.490 [2024-10-07 09:52:37.907005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.490 [2024-10-07 09:52:37.907061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.490 [2024-10-07 09:52:37.907074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.490 [2024-10-07 09:52:37.907081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.490 [2024-10-07 09:52:37.907088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.490 [2024-10-07 09:52:37.907102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.490 qpair failed and we were unable to recover it. 
00:31:38.490 [2024-10-07 09:52:37.917055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.490 [2024-10-07 09:52:37.917108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.490 [2024-10-07 09:52:37.917121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.490 [2024-10-07 09:52:37.917128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.490 [2024-10-07 09:52:37.917135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.490 [2024-10-07 09:52:37.917149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.490 qpair failed and we were unable to recover it. 00:31:38.490 [2024-10-07 09:52:37.926892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.490 [2024-10-07 09:52:37.926954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.490 [2024-10-07 09:52:37.926968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.490 [2024-10-07 09:52:37.926975] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.490 [2024-10-07 09:52:37.926981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.490 [2024-10-07 09:52:37.926995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.490 qpair failed and we were unable to recover it. 00:31:38.490 [2024-10-07 09:52:37.937099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.490 [2024-10-07 09:52:37.937148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.490 [2024-10-07 09:52:37.937161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.490 [2024-10-07 09:52:37.937169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.490 [2024-10-07 09:52:37.937175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.490 [2024-10-07 09:52:37.937189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.490 qpair failed and we were unable to recover it. 
00:31:38.490 [2024-10-07 09:52:37.947120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.490 [2024-10-07 09:52:37.947177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.490 [2024-10-07 09:52:37.947190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.490 [2024-10-07 09:52:37.947197] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.490 [2024-10-07 09:52:37.947203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.490 [2024-10-07 09:52:37.947217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.490 qpair failed and we were unable to recover it. 00:31:38.490 [2024-10-07 09:52:37.957155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.490 [2024-10-07 09:52:37.957209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.490 [2024-10-07 09:52:37.957223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.490 [2024-10-07 09:52:37.957230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.490 [2024-10-07 09:52:37.957236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.490 [2024-10-07 09:52:37.957250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.490 qpair failed and we were unable to recover it. 00:31:38.490 [2024-10-07 09:52:37.967142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.490 [2024-10-07 09:52:37.967227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.490 [2024-10-07 09:52:37.967241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.490 [2024-10-07 09:52:37.967248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.490 [2024-10-07 09:52:37.967255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:38.490 [2024-10-07 09:52:37.967269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:38.490 qpair failed and we were unable to recover it. 
00:31:38.491 [2024-10-07 09:52:37.977170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.491 [2024-10-07 09:52:37.977227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.491 [2024-10-07 09:52:37.977243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.491 [2024-10-07 09:52:37.977250] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.491 [2024-10-07 09:52:37.977257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.491 [2024-10-07 09:52:37.977271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.491 qpair failed and we were unable to recover it.
00:31:38.491 [2024-10-07 09:52:37.987236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.491 [2024-10-07 09:52:37.987291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.491 [2024-10-07 09:52:37.987304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.491 [2024-10-07 09:52:37.987311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.491 [2024-10-07 09:52:37.987318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.491 [2024-10-07 09:52:37.987332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.491 qpair failed and we were unable to recover it.
00:31:38.491 [2024-10-07 09:52:37.997257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.491 [2024-10-07 09:52:37.997324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.491 [2024-10-07 09:52:37.997337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.491 [2024-10-07 09:52:37.997345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.491 [2024-10-07 09:52:37.997351] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.491 [2024-10-07 09:52:37.997365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.491 qpair failed and we were unable to recover it.
00:31:38.491 [2024-10-07 09:52:38.007219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.491 [2024-10-07 09:52:38.007263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.491 [2024-10-07 09:52:38.007277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.491 [2024-10-07 09:52:38.007284] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.491 [2024-10-07 09:52:38.007290] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.491 [2024-10-07 09:52:38.007305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.491 qpair failed and we were unable to recover it.
00:31:38.491 [2024-10-07 09:52:38.017314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.491 [2024-10-07 09:52:38.017370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.491 [2024-10-07 09:52:38.017395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.491 [2024-10-07 09:52:38.017404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.491 [2024-10-07 09:52:38.017411] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.491 [2024-10-07 09:52:38.017435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.491 qpair failed and we were unable to recover it.
00:31:38.491 [2024-10-07 09:52:38.027362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.491 [2024-10-07 09:52:38.027472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.491 [2024-10-07 09:52:38.027497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.491 [2024-10-07 09:52:38.027506] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.491 [2024-10-07 09:52:38.027513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.491 [2024-10-07 09:52:38.027532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.491 qpair failed and we were unable to recover it.
00:31:38.491 [2024-10-07 09:52:38.037428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.491 [2024-10-07 09:52:38.037483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.491 [2024-10-07 09:52:38.037499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.491 [2024-10-07 09:52:38.037506] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.491 [2024-10-07 09:52:38.037513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.491 [2024-10-07 09:52:38.037528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.491 qpair failed and we were unable to recover it.
00:31:38.491 [2024-10-07 09:52:38.047364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.491 [2024-10-07 09:52:38.047409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.491 [2024-10-07 09:52:38.047424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.491 [2024-10-07 09:52:38.047431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.491 [2024-10-07 09:52:38.047438] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.491 [2024-10-07 09:52:38.047453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.491 qpair failed and we were unable to recover it.
00:31:38.491 [2024-10-07 09:52:38.057431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.491 [2024-10-07 09:52:38.057484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.491 [2024-10-07 09:52:38.057498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.491 [2024-10-07 09:52:38.057505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.491 [2024-10-07 09:52:38.057512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.491 [2024-10-07 09:52:38.057526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.491 qpair failed and we were unable to recover it.
00:31:38.491 [2024-10-07 09:52:38.067455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.491 [2024-10-07 09:52:38.067537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.491 [2024-10-07 09:52:38.067555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.491 [2024-10-07 09:52:38.067563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.491 [2024-10-07 09:52:38.067570] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.491 [2024-10-07 09:52:38.067585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.491 qpair failed and we were unable to recover it.
00:31:38.491 [2024-10-07 09:52:38.077512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.491 [2024-10-07 09:52:38.077570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.491 [2024-10-07 09:52:38.077583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.491 [2024-10-07 09:52:38.077591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.491 [2024-10-07 09:52:38.077597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.491 [2024-10-07 09:52:38.077611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.491 qpair failed and we were unable to recover it.
00:31:38.491 [2024-10-07 09:52:38.087482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.491 [2024-10-07 09:52:38.087529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.491 [2024-10-07 09:52:38.087543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.491 [2024-10-07 09:52:38.087550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.491 [2024-10-07 09:52:38.087556] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.491 [2024-10-07 09:52:38.087570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.491 qpair failed and we were unable to recover it.
00:31:38.491 [2024-10-07 09:52:38.097499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.491 [2024-10-07 09:52:38.097583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.491 [2024-10-07 09:52:38.097597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.491 [2024-10-07 09:52:38.097604] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.491 [2024-10-07 09:52:38.097610] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.491 [2024-10-07 09:52:38.097628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.491 qpair failed and we were unable to recover it.
00:31:38.491 [2024-10-07 09:52:38.107575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.491 [2024-10-07 09:52:38.107634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.491 [2024-10-07 09:52:38.107647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.492 [2024-10-07 09:52:38.107654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.492 [2024-10-07 09:52:38.107661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.492 [2024-10-07 09:52:38.107679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.492 qpair failed and we were unable to recover it.
00:31:38.492 [2024-10-07 09:52:38.117601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.492 [2024-10-07 09:52:38.117657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.492 [2024-10-07 09:52:38.117671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.492 [2024-10-07 09:52:38.117678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.492 [2024-10-07 09:52:38.117685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.492 [2024-10-07 09:52:38.117699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.492 qpair failed and we were unable to recover it.
00:31:38.492 [2024-10-07 09:52:38.127448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.492 [2024-10-07 09:52:38.127494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.492 [2024-10-07 09:52:38.127509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.492 [2024-10-07 09:52:38.127516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.492 [2024-10-07 09:52:38.127523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.492 [2024-10-07 09:52:38.127538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.492 qpair failed and we were unable to recover it.
00:31:38.492 [2024-10-07 09:52:38.137512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.492 [2024-10-07 09:52:38.137570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.492 [2024-10-07 09:52:38.137584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.492 [2024-10-07 09:52:38.137591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.492 [2024-10-07 09:52:38.137597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.492 [2024-10-07 09:52:38.137611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.492 qpair failed and we were unable to recover it.
00:31:38.492 [2024-10-07 09:52:38.147705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.492 [2024-10-07 09:52:38.147762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.492 [2024-10-07 09:52:38.147776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.492 [2024-10-07 09:52:38.147783] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.492 [2024-10-07 09:52:38.147789] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.492 [2024-10-07 09:52:38.147804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.492 qpair failed and we were unable to recover it.
00:31:38.754 [2024-10-07 09:52:38.157727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.754 [2024-10-07 09:52:38.157784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.754 [2024-10-07 09:52:38.157800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.754 [2024-10-07 09:52:38.157808] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.754 [2024-10-07 09:52:38.157814] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.754 [2024-10-07 09:52:38.157828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.754 qpair failed and we were unable to recover it.
00:31:38.754 [2024-10-07 09:52:38.167695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.754 [2024-10-07 09:52:38.167765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.754 [2024-10-07 09:52:38.167779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.754 [2024-10-07 09:52:38.167785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.754 [2024-10-07 09:52:38.167792] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.754 [2024-10-07 09:52:38.167806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.754 qpair failed and we were unable to recover it.
00:31:38.754 [2024-10-07 09:52:38.177657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.754 [2024-10-07 09:52:38.177707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.754 [2024-10-07 09:52:38.177720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.754 [2024-10-07 09:52:38.177728] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.754 [2024-10-07 09:52:38.177734] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.754 [2024-10-07 09:52:38.177748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.754 qpair failed and we were unable to recover it.
00:31:38.754 [2024-10-07 09:52:38.187809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.754 [2024-10-07 09:52:38.187896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.754 [2024-10-07 09:52:38.187910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.754 [2024-10-07 09:52:38.187917] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.754 [2024-10-07 09:52:38.187923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.754 [2024-10-07 09:52:38.187937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.754 qpair failed and we were unable to recover it.
00:31:38.754 [2024-10-07 09:52:38.197836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.754 [2024-10-07 09:52:38.197891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.754 [2024-10-07 09:52:38.197905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.754 [2024-10-07 09:52:38.197912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.754 [2024-10-07 09:52:38.197922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.754 [2024-10-07 09:52:38.197936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.754 qpair failed and we were unable to recover it.
00:31:38.754 [2024-10-07 09:52:38.207802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.754 [2024-10-07 09:52:38.207858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.754 [2024-10-07 09:52:38.207871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.754 [2024-10-07 09:52:38.207878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.754 [2024-10-07 09:52:38.207884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.754 [2024-10-07 09:52:38.207898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.754 qpair failed and we were unable to recover it.
00:31:38.754 [2024-10-07 09:52:38.217746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.754 [2024-10-07 09:52:38.217797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.754 [2024-10-07 09:52:38.217811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.754 [2024-10-07 09:52:38.217818] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.754 [2024-10-07 09:52:38.217824] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.754 [2024-10-07 09:52:38.217838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.754 qpair failed and we were unable to recover it.
00:31:38.754 [2024-10-07 09:52:38.227781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.754 [2024-10-07 09:52:38.227836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.754 [2024-10-07 09:52:38.227849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.754 [2024-10-07 09:52:38.227856] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.754 [2024-10-07 09:52:38.227863] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.754 [2024-10-07 09:52:38.227877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.754 qpair failed and we were unable to recover it.
00:31:38.754 [2024-10-07 09:52:38.237959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.754 [2024-10-07 09:52:38.238010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.754 [2024-10-07 09:52:38.238023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.754 [2024-10-07 09:52:38.238030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.754 [2024-10-07 09:52:38.238036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.754 [2024-10-07 09:52:38.238050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.754 qpair failed and we were unable to recover it.
00:31:38.754 [2024-10-07 09:52:38.247941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.754 [2024-10-07 09:52:38.248023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.754 [2024-10-07 09:52:38.248037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.755 [2024-10-07 09:52:38.248044] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.755 [2024-10-07 09:52:38.248050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.755 [2024-10-07 09:52:38.248064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.755 qpair failed and we were unable to recover it.
00:31:38.755 [2024-10-07 09:52:38.257903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.755 [2024-10-07 09:52:38.257957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.755 [2024-10-07 09:52:38.257985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.755 [2024-10-07 09:52:38.257992] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.755 [2024-10-07 09:52:38.257999] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.755 [2024-10-07 09:52:38.258020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.755 qpair failed and we were unable to recover it.
00:31:38.755 [2024-10-07 09:52:38.267993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.755 [2024-10-07 09:52:38.268048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.755 [2024-10-07 09:52:38.268062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.755 [2024-10-07 09:52:38.268069] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.755 [2024-10-07 09:52:38.268075] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.755 [2024-10-07 09:52:38.268090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.755 qpair failed and we were unable to recover it.
00:31:38.755 [2024-10-07 09:52:38.277961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.755 [2024-10-07 09:52:38.278017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.755 [2024-10-07 09:52:38.278030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.755 [2024-10-07 09:52:38.278037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.755 [2024-10-07 09:52:38.278044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.755 [2024-10-07 09:52:38.278058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.755 qpair failed and we were unable to recover it.
00:31:38.755 [2024-10-07 09:52:38.287919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.755 [2024-10-07 09:52:38.287994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.755 [2024-10-07 09:52:38.288007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.755 [2024-10-07 09:52:38.288014] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.755 [2024-10-07 09:52:38.288024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.755 [2024-10-07 09:52:38.288039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.755 qpair failed and we were unable to recover it.
00:31:38.755 [2024-10-07 09:52:38.298084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.755 [2024-10-07 09:52:38.298140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.755 [2024-10-07 09:52:38.298154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.755 [2024-10-07 09:52:38.298161] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.755 [2024-10-07 09:52:38.298168] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.755 [2024-10-07 09:52:38.298182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.755 qpair failed and we were unable to recover it.
00:31:38.755 [2024-10-07 09:52:38.308163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.755 [2024-10-07 09:52:38.308246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.755 [2024-10-07 09:52:38.308259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.755 [2024-10-07 09:52:38.308266] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.755 [2024-10-07 09:52:38.308273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.755 [2024-10-07 09:52:38.308287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.755 qpair failed and we were unable to recover it.
00:31:38.755 [2024-10-07 09:52:38.318153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.755 [2024-10-07 09:52:38.318207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.755 [2024-10-07 09:52:38.318221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.755 [2024-10-07 09:52:38.318228] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.755 [2024-10-07 09:52:38.318235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.755 [2024-10-07 09:52:38.318249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.755 qpair failed and we were unable to recover it.
00:31:38.755 [2024-10-07 09:52:38.328193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.755 [2024-10-07 09:52:38.328256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.755 [2024-10-07 09:52:38.328269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.755 [2024-10-07 09:52:38.328276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.755 [2024-10-07 09:52:38.328283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.755 [2024-10-07 09:52:38.328297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.755 qpair failed and we were unable to recover it.
00:31:38.755 [2024-10-07 09:52:38.338214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.755 [2024-10-07 09:52:38.338264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.755 [2024-10-07 09:52:38.338277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.755 [2024-10-07 09:52:38.338285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.755 [2024-10-07 09:52:38.338291] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.755 [2024-10-07 09:52:38.338305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.755 qpair failed and we were unable to recover it.
00:31:38.755 [2024-10-07 09:52:38.348236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.755 [2024-10-07 09:52:38.348291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.755 [2024-10-07 09:52:38.348304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.755 [2024-10-07 09:52:38.348311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.755 [2024-10-07 09:52:38.348317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.755 [2024-10-07 09:52:38.348331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.755 qpair failed and we were unable to recover it.
00:31:38.755 [2024-10-07 09:52:38.358284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.755 [2024-10-07 09:52:38.358336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.755 [2024-10-07 09:52:38.358350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.755 [2024-10-07 09:52:38.358357] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.755 [2024-10-07 09:52:38.358363] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.755 [2024-10-07 09:52:38.358377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.755 qpair failed and we were unable to recover it.
00:31:38.755 [2024-10-07 09:52:38.368266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.755 [2024-10-07 09:52:38.368312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.755 [2024-10-07 09:52:38.368326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.755 [2024-10-07 09:52:38.368332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.755 [2024-10-07 09:52:38.368339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.755 [2024-10-07 09:52:38.368353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.755 qpair failed and we were unable to recover it.
00:31:38.755 [2024-10-07 09:52:38.378188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.755 [2024-10-07 09:52:38.378246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.755 [2024-10-07 09:52:38.378259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.755 [2024-10-07 09:52:38.378269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.755 [2024-10-07 09:52:38.378276] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.755 [2024-10-07 09:52:38.378290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.756 qpair failed and we were unable to recover it.
00:31:38.756 [2024-10-07 09:52:38.388346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.756 [2024-10-07 09:52:38.388398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.756 [2024-10-07 09:52:38.388412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.756 [2024-10-07 09:52:38.388419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.756 [2024-10-07 09:52:38.388426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.756 [2024-10-07 09:52:38.388440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.756 qpair failed and we were unable to recover it.
00:31:38.756 [2024-10-07 09:52:38.398391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.756 [2024-10-07 09:52:38.398457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.756 [2024-10-07 09:52:38.398482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.756 [2024-10-07 09:52:38.398491] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.756 [2024-10-07 09:52:38.398498] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.756 [2024-10-07 09:52:38.398517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.756 qpair failed and we were unable to recover it.
00:31:38.756 [2024-10-07 09:52:38.408361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.756 [2024-10-07 09:52:38.408441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.756 [2024-10-07 09:52:38.408457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.756 [2024-10-07 09:52:38.408465] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.756 [2024-10-07 09:52:38.408472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:38.756 [2024-10-07 09:52:38.408489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:38.756 qpair failed and we were unable to recover it.
00:31:39.018 [2024-10-07 09:52:38.418306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.018 [2024-10-07 09:52:38.418355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.018 [2024-10-07 09:52:38.418370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.018 [2024-10-07 09:52:38.418377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.018 [2024-10-07 09:52:38.418384] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.018 [2024-10-07 09:52:38.418399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.018 qpair failed and we were unable to recover it.
00:31:39.018 [2024-10-07 09:52:38.428344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.018 [2024-10-07 09:52:38.428406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.018 [2024-10-07 09:52:38.428420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.018 [2024-10-07 09:52:38.428427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.018 [2024-10-07 09:52:38.428433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.018 [2024-10-07 09:52:38.428448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.018 qpair failed and we were unable to recover it.
00:31:39.018 [2024-10-07 09:52:38.438490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.018 [2024-10-07 09:52:38.438543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.018 [2024-10-07 09:52:38.438557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.018 [2024-10-07 09:52:38.438565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.018 [2024-10-07 09:52:38.438571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.018 [2024-10-07 09:52:38.438585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.018 qpair failed and we were unable to recover it.
00:31:39.018 [2024-10-07 09:52:38.448477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.018 [2024-10-07 09:52:38.448524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.018 [2024-10-07 09:52:38.448537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.018 [2024-10-07 09:52:38.448544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.018 [2024-10-07 09:52:38.448551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.018 [2024-10-07 09:52:38.448565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.018 qpair failed and we were unable to recover it.
00:31:39.018 [2024-10-07 09:52:38.458546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.018 [2024-10-07 09:52:38.458620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.018 [2024-10-07 09:52:38.458634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.018 [2024-10-07 09:52:38.458641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.018 [2024-10-07 09:52:38.458647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.018 [2024-10-07 09:52:38.458662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.018 qpair failed and we were unable to recover it.
00:31:39.018 [2024-10-07 09:52:38.468593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.018 [2024-10-07 09:52:38.468653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.018 [2024-10-07 09:52:38.468670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.018 [2024-10-07 09:52:38.468677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.018 [2024-10-07 09:52:38.468683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.018 [2024-10-07 09:52:38.468698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.018 qpair failed and we were unable to recover it.
00:31:39.018 [2024-10-07 09:52:38.478615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.018 [2024-10-07 09:52:38.478677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.018 [2024-10-07 09:52:38.478691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.018 [2024-10-07 09:52:38.478698] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.018 [2024-10-07 09:52:38.478704] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.018 [2024-10-07 09:52:38.478719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.018 qpair failed and we were unable to recover it.
00:31:39.018 [2024-10-07 09:52:38.488588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.018 [2024-10-07 09:52:38.488674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.018 [2024-10-07 09:52:38.488689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.018 [2024-10-07 09:52:38.488696] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.018 [2024-10-07 09:52:38.488703] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.018 [2024-10-07 09:52:38.488719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.018 qpair failed and we were unable to recover it.
00:31:39.018 [2024-10-07 09:52:38.498666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.018 [2024-10-07 09:52:38.498716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.018 [2024-10-07 09:52:38.498729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.018 [2024-10-07 09:52:38.498736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.018 [2024-10-07 09:52:38.498743] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.018 [2024-10-07 09:52:38.498757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.019 qpair failed and we were unable to recover it.
00:31:39.019 [2024-10-07 09:52:38.508708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.019 [2024-10-07 09:52:38.508760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.019 [2024-10-07 09:52:38.508773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.019 [2024-10-07 09:52:38.508780] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.019 [2024-10-07 09:52:38.508786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.019 [2024-10-07 09:52:38.508800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.019 qpair failed and we were unable to recover it.
00:31:39.019 [2024-10-07 09:52:38.518715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.019 [2024-10-07 09:52:38.518771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.019 [2024-10-07 09:52:38.518786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.019 [2024-10-07 09:52:38.518793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.019 [2024-10-07 09:52:38.518800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.019 [2024-10-07 09:52:38.518814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.019 qpair failed and we were unable to recover it.
00:31:39.019 [2024-10-07 09:52:38.528626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.019 [2024-10-07 09:52:38.528677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.019 [2024-10-07 09:52:38.528690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.019 [2024-10-07 09:52:38.528697] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.019 [2024-10-07 09:52:38.528704] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.019 [2024-10-07 09:52:38.528718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.019 qpair failed and we were unable to recover it.
00:31:39.019 [2024-10-07 09:52:38.538781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.019 [2024-10-07 09:52:38.538835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.019 [2024-10-07 09:52:38.538848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.019 [2024-10-07 09:52:38.538855] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.019 [2024-10-07 09:52:38.538862] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.019 [2024-10-07 09:52:38.538876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.019 qpair failed and we were unable to recover it.
00:31:39.019 [2024-10-07 09:52:38.548772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.019 [2024-10-07 09:52:38.548830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.019 [2024-10-07 09:52:38.548844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.019 [2024-10-07 09:52:38.548851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.019 [2024-10-07 09:52:38.548857] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.019 [2024-10-07 09:52:38.548871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.019 qpair failed and we were unable to recover it.
00:31:39.019 [2024-10-07 09:52:38.558857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.019 [2024-10-07 09:52:38.558911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.019 [2024-10-07 09:52:38.558928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.019 [2024-10-07 09:52:38.558935] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.019 [2024-10-07 09:52:38.558942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.019 [2024-10-07 09:52:38.558956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.019 qpair failed and we were unable to recover it.
00:31:39.019 [2024-10-07 09:52:38.568693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.019 [2024-10-07 09:52:38.568743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.019 [2024-10-07 09:52:38.568757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.019 [2024-10-07 09:52:38.568764] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.019 [2024-10-07 09:52:38.568770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.019 [2024-10-07 09:52:38.568784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.019 qpair failed and we were unable to recover it.
00:31:39.019 [2024-10-07 09:52:38.578791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.019 [2024-10-07 09:52:38.578874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.019 [2024-10-07 09:52:38.578888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.019 [2024-10-07 09:52:38.578895] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.019 [2024-10-07 09:52:38.578901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.019 [2024-10-07 09:52:38.578915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.019 qpair failed and we were unable to recover it.
00:31:39.019 [2024-10-07 09:52:38.588793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.019 [2024-10-07 09:52:38.588848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.019 [2024-10-07 09:52:38.588862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.019 [2024-10-07 09:52:38.588869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.019 [2024-10-07 09:52:38.588875] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.019 [2024-10-07 09:52:38.588889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.019 qpair failed and we were unable to recover it.
00:31:39.019 [2024-10-07 09:52:38.598946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.019 [2024-10-07 09:52:38.599004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.019 [2024-10-07 09:52:38.599018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.019 [2024-10-07 09:52:38.599025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.019 [2024-10-07 09:52:38.599031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.019 [2024-10-07 09:52:38.599049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.019 qpair failed and we were unable to recover it.
00:31:39.019 [2024-10-07 09:52:38.608903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.019 [2024-10-07 09:52:38.608949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.019 [2024-10-07 09:52:38.608963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.019 [2024-10-07 09:52:38.608970] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.019 [2024-10-07 09:52:38.608977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.019 [2024-10-07 09:52:38.608991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.019 qpair failed and we were unable to recover it.
00:31:39.019 [2024-10-07 09:52:38.619005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.019 [2024-10-07 09:52:38.619051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.019 [2024-10-07 09:52:38.619065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.019 [2024-10-07 09:52:38.619072] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.019 [2024-10-07 09:52:38.619079] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.019 [2024-10-07 09:52:38.619093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.019 qpair failed and we were unable to recover it.
00:31:39.019 [2024-10-07 09:52:38.628889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.019 [2024-10-07 09:52:38.628956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.019 [2024-10-07 09:52:38.628970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.019 [2024-10-07 09:52:38.628977] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.019 [2024-10-07 09:52:38.628984] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.019 [2024-10-07 09:52:38.628999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.019 qpair failed and we were unable to recover it.
00:31:39.019 [2024-10-07 09:52:38.639066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.019 [2024-10-07 09:52:38.639174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.019 [2024-10-07 09:52:38.639188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.020 [2024-10-07 09:52:38.639195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.020 [2024-10-07 09:52:38.639202] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.020 [2024-10-07 09:52:38.639216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.020 qpair failed and we were unable to recover it.
00:31:39.020 [2024-10-07 09:52:38.649032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.020 [2024-10-07 09:52:38.649076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.020 [2024-10-07 09:52:38.649093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.020 [2024-10-07 09:52:38.649100] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.020 [2024-10-07 09:52:38.649106] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.020 [2024-10-07 09:52:38.649121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.020 qpair failed and we were unable to recover it.
00:31:39.020 [2024-10-07 09:52:38.659094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.020 [2024-10-07 09:52:38.659143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.020 [2024-10-07 09:52:38.659156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.020 [2024-10-07 09:52:38.659164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.020 [2024-10-07 09:52:38.659170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.020 [2024-10-07 09:52:38.659184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.020 qpair failed and we were unable to recover it.
00:31:39.020 [2024-10-07 09:52:38.669105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.020 [2024-10-07 09:52:38.669171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.020 [2024-10-07 09:52:38.669185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.020 [2024-10-07 09:52:38.669192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.020 [2024-10-07 09:52:38.669198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.020 [2024-10-07 09:52:38.669212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.020 qpair failed and we were unable to recover it.
00:31:39.283 [2024-10-07 09:52:38.679181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.283 [2024-10-07 09:52:38.679235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.283 [2024-10-07 09:52:38.679248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.283 [2024-10-07 09:52:38.679255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.283 [2024-10-07 09:52:38.679262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.283 [2024-10-07 09:52:38.679276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.283 qpair failed and we were unable to recover it.
00:31:39.283 [2024-10-07 09:52:38.689122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.283 [2024-10-07 09:52:38.689168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.283 [2024-10-07 09:52:38.689182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.283 [2024-10-07 09:52:38.689189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.283 [2024-10-07 09:52:38.689199] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.283 [2024-10-07 09:52:38.689214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.283 qpair failed and we were unable to recover it.
00:31:39.283 [2024-10-07 09:52:38.699245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.283 [2024-10-07 09:52:38.699322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.283 [2024-10-07 09:52:38.699336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.283 [2024-10-07 09:52:38.699343] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.283 [2024-10-07 09:52:38.699349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.283 [2024-10-07 09:52:38.699363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.283 qpair failed and we were unable to recover it.
00:31:39.283 [2024-10-07 09:52:38.709240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.283 [2024-10-07 09:52:38.709338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.283 [2024-10-07 09:52:38.709352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.283 [2024-10-07 09:52:38.709359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.283 [2024-10-07 09:52:38.709366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.283 [2024-10-07 09:52:38.709380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.283 qpair failed and we were unable to recover it.
00:31:39.283 [2024-10-07 09:52:38.719275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.283 [2024-10-07 09:52:38.719347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.283 [2024-10-07 09:52:38.719361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.283 [2024-10-07 09:52:38.719368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.283 [2024-10-07 09:52:38.719374] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.283 [2024-10-07 09:52:38.719388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.283 qpair failed and we were unable to recover it.
00:31:39.283 [2024-10-07 09:52:38.729124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.283 [2024-10-07 09:52:38.729175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.283 [2024-10-07 09:52:38.729188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.283 [2024-10-07 09:52:38.729195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.283 [2024-10-07 09:52:38.729201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.283 [2024-10-07 09:52:38.729215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.283 qpair failed and we were unable to recover it.
00:31:39.283 [2024-10-07 09:52:38.739322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.283 [2024-10-07 09:52:38.739372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.283 [2024-10-07 09:52:38.739385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.283 [2024-10-07 09:52:38.739392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.283 [2024-10-07 09:52:38.739399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.283 [2024-10-07 09:52:38.739413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.283 qpair failed and we were unable to recover it.
00:31:39.283 [2024-10-07 09:52:38.749216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.283 [2024-10-07 09:52:38.749273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.283 [2024-10-07 09:52:38.749286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.283 [2024-10-07 09:52:38.749293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.283 [2024-10-07 09:52:38.749300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.283 [2024-10-07 09:52:38.749314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.284 qpair failed and we were unable to recover it.
00:31:39.284 [2024-10-07 09:52:38.759388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.284 [2024-10-07 09:52:38.759442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.284 [2024-10-07 09:52:38.759455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.284 [2024-10-07 09:52:38.759462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.284 [2024-10-07 09:52:38.759469] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.284 [2024-10-07 09:52:38.759483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.284 qpair failed and we were unable to recover it.
00:31:39.284 [2024-10-07 09:52:38.769270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.284 [2024-10-07 09:52:38.769365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.284 [2024-10-07 09:52:38.769378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.284 [2024-10-07 09:52:38.769385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.284 [2024-10-07 09:52:38.769392] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.284 [2024-10-07 09:52:38.769405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.284 qpair failed and we were unable to recover it.
00:31:39.284 [2024-10-07 09:52:38.779416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.284 [2024-10-07 09:52:38.779471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.284 [2024-10-07 09:52:38.779485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.284 [2024-10-07 09:52:38.779492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.284 [2024-10-07 09:52:38.779502] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.284 [2024-10-07 09:52:38.779516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.284 qpair failed and we were unable to recover it.
00:31:39.284 [2024-10-07 09:52:38.789451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.284 [2024-10-07 09:52:38.789514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.284 [2024-10-07 09:52:38.789539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.284 [2024-10-07 09:52:38.789548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.284 [2024-10-07 09:52:38.789555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.284 [2024-10-07 09:52:38.789574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.284 qpair failed and we were unable to recover it.
00:31:39.284 [2024-10-07 09:52:38.799479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.284 [2024-10-07 09:52:38.799544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.284 [2024-10-07 09:52:38.799569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.284 [2024-10-07 09:52:38.799578] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.284 [2024-10-07 09:52:38.799585] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.284 [2024-10-07 09:52:38.799604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.284 qpair failed and we were unable to recover it.
00:31:39.284 [2024-10-07 09:52:38.809340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.284 [2024-10-07 09:52:38.809389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.284 [2024-10-07 09:52:38.809405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.284 [2024-10-07 09:52:38.809412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.284 [2024-10-07 09:52:38.809419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.284 [2024-10-07 09:52:38.809434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.284 qpair failed and we were unable to recover it.
00:31:39.284 [2024-10-07 09:52:38.819536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.284 [2024-10-07 09:52:38.819636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.284 [2024-10-07 09:52:38.819651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.284 [2024-10-07 09:52:38.819658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.284 [2024-10-07 09:52:38.819665] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.284 [2024-10-07 09:52:38.819680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.284 qpair failed and we were unable to recover it.
00:31:39.284 [2024-10-07 09:52:38.829573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.284 [2024-10-07 09:52:38.829663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.284 [2024-10-07 09:52:38.829677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.284 [2024-10-07 09:52:38.829684] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.284 [2024-10-07 09:52:38.829691] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.284 [2024-10-07 09:52:38.829705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.284 qpair failed and we were unable to recover it.
00:31:39.284 [2024-10-07 09:52:38.839472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.284 [2024-10-07 09:52:38.839532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.284 [2024-10-07 09:52:38.839545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.284 [2024-10-07 09:52:38.839552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.284 [2024-10-07 09:52:38.839559] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.284 [2024-10-07 09:52:38.839573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.284 qpair failed and we were unable to recover it.
00:31:39.284 [2024-10-07 09:52:38.849559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.284 [2024-10-07 09:52:38.849605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.284 [2024-10-07 09:52:38.849622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.284 [2024-10-07 09:52:38.849630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.284 [2024-10-07 09:52:38.849637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.284 [2024-10-07 09:52:38.849651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.284 qpair failed and we were unable to recover it.
00:31:39.284 [2024-10-07 09:52:38.859685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.284 [2024-10-07 09:52:38.859738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.284 [2024-10-07 09:52:38.859751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.284 [2024-10-07 09:52:38.859758] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.284 [2024-10-07 09:52:38.859765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.284 [2024-10-07 09:52:38.859779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.284 qpair failed and we were unable to recover it.
00:31:39.284 [2024-10-07 09:52:38.869683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.284 [2024-10-07 09:52:38.869736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.284 [2024-10-07 09:52:38.869750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.284 [2024-10-07 09:52:38.869761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.284 [2024-10-07 09:52:38.869768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.284 [2024-10-07 09:52:38.869782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.284 qpair failed and we were unable to recover it.
00:31:39.284 [2024-10-07 09:52:38.879709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.284 [2024-10-07 09:52:38.879762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.284 [2024-10-07 09:52:38.879776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.284 [2024-10-07 09:52:38.879783] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.284 [2024-10-07 09:52:38.879790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.284 [2024-10-07 09:52:38.879805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.284 qpair failed and we were unable to recover it.
00:31:39.284 [2024-10-07 09:52:38.889694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.284 [2024-10-07 09:52:38.889740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.284 [2024-10-07 09:52:38.889753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.285 [2024-10-07 09:52:38.889760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.285 [2024-10-07 09:52:38.889766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.285 [2024-10-07 09:52:38.889781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.285 qpair failed and we were unable to recover it.
00:31:39.285 [2024-10-07 09:52:38.899767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.285 [2024-10-07 09:52:38.899817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.285 [2024-10-07 09:52:38.899830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.285 [2024-10-07 09:52:38.899838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.285 [2024-10-07 09:52:38.899844] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.285 [2024-10-07 09:52:38.899858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.285 qpair failed and we were unable to recover it.
00:31:39.285 [2024-10-07 09:52:38.909827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.285 [2024-10-07 09:52:38.909885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.285 [2024-10-07 09:52:38.909899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.285 [2024-10-07 09:52:38.909907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.285 [2024-10-07 09:52:38.909913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.285 [2024-10-07 09:52:38.909930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.285 qpair failed and we were unable to recover it.
00:31:39.285 [2024-10-07 09:52:38.919885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.285 [2024-10-07 09:52:38.919937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.285 [2024-10-07 09:52:38.919952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.285 [2024-10-07 09:52:38.919959] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.285 [2024-10-07 09:52:38.919965] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.285 [2024-10-07 09:52:38.919979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.285 qpair failed and we were unable to recover it.
00:31:39.285 [2024-10-07 09:52:38.929811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.285 [2024-10-07 09:52:38.929858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.285 [2024-10-07 09:52:38.929871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.285 [2024-10-07 09:52:38.929879] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.285 [2024-10-07 09:52:38.929885] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.285 [2024-10-07 09:52:38.929899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.285 qpair failed and we were unable to recover it.
00:31:39.285 [2024-10-07 09:52:38.939878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.285 [2024-10-07 09:52:38.939929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.285 [2024-10-07 09:52:38.939943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.285 [2024-10-07 09:52:38.939950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.285 [2024-10-07 09:52:38.939956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.285 [2024-10-07 09:52:38.939970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.285 qpair failed and we were unable to recover it.
00:31:39.548 [2024-10-07 09:52:38.949844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.548 [2024-10-07 09:52:38.949901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.548 [2024-10-07 09:52:38.949914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.548 [2024-10-07 09:52:38.949921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.548 [2024-10-07 09:52:38.949928] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.548 [2024-10-07 09:52:38.949942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.548 qpair failed and we were unable to recover it.
00:31:39.548 [2024-10-07 09:52:38.959862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.548 [2024-10-07 09:52:38.959917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.548 [2024-10-07 09:52:38.959930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.548 [2024-10-07 09:52:38.959944] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.548 [2024-10-07 09:52:38.959951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.548 [2024-10-07 09:52:38.959965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.548 qpair failed and we were unable to recover it.
00:31:39.548 [2024-10-07 09:52:38.969941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.548 [2024-10-07 09:52:38.969988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.548 [2024-10-07 09:52:38.970001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.548 [2024-10-07 09:52:38.970008] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.548 [2024-10-07 09:52:38.970014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.548 [2024-10-07 09:52:38.970028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.548 qpair failed and we were unable to recover it.
00:31:39.548 [2024-10-07 09:52:38.979962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.548 [2024-10-07 09:52:38.980047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.548 [2024-10-07 09:52:38.980061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.548 [2024-10-07 09:52:38.980068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.548 [2024-10-07 09:52:38.980075] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.548 [2024-10-07 09:52:38.980089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.548 qpair failed and we were unable to recover it.
00:31:39.548 [2024-10-07 09:52:38.989948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.548 [2024-10-07 09:52:38.990009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.548 [2024-10-07 09:52:38.990023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.548 [2024-10-07 09:52:38.990030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.548 [2024-10-07 09:52:38.990036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.548 [2024-10-07 09:52:38.990051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.548 qpair failed and we were unable to recover it.
00:31:39.548 [2024-10-07 09:52:39.000079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.548 [2024-10-07 09:52:39.000134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.548 [2024-10-07 09:52:39.000148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.548 [2024-10-07 09:52:39.000156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.548 [2024-10-07 09:52:39.000162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.548 [2024-10-07 09:52:39.000176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.548 qpair failed and we were unable to recover it.
00:31:39.548 [2024-10-07 09:52:39.009994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.548 [2024-10-07 09:52:39.010079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.548 [2024-10-07 09:52:39.010092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.548 [2024-10-07 09:52:39.010100] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.548 [2024-10-07 09:52:39.010106] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.548 [2024-10-07 09:52:39.010120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.548 qpair failed and we were unable to recover it.
00:31:39.548 [2024-10-07 09:52:39.020084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.548 [2024-10-07 09:52:39.020143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.548 [2024-10-07 09:52:39.020156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.548 [2024-10-07 09:52:39.020163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.549 [2024-10-07 09:52:39.020170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.549 [2024-10-07 09:52:39.020184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.549 qpair failed and we were unable to recover it.
00:31:39.549 [2024-10-07 09:52:39.030144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.549 [2024-10-07 09:52:39.030199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.549 [2024-10-07 09:52:39.030213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.549 [2024-10-07 09:52:39.030220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.549 [2024-10-07 09:52:39.030227] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.549 [2024-10-07 09:52:39.030241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.549 qpair failed and we were unable to recover it.
00:31:39.549 [2024-10-07 09:52:39.040173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.549 [2024-10-07 09:52:39.040227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.549 [2024-10-07 09:52:39.040241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.549 [2024-10-07 09:52:39.040248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.549 [2024-10-07 09:52:39.040254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.549 [2024-10-07 09:52:39.040268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.549 qpair failed and we were unable to recover it.
00:31:39.549 [2024-10-07 09:52:39.050034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.549 [2024-10-07 09:52:39.050082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.549 [2024-10-07 09:52:39.050099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.549 [2024-10-07 09:52:39.050106] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.549 [2024-10-07 09:52:39.050112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.549 [2024-10-07 09:52:39.050126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.549 qpair failed and we were unable to recover it.
00:31:39.549 [2024-10-07 09:52:39.060130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.549 [2024-10-07 09:52:39.060182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.549 [2024-10-07 09:52:39.060196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.549 [2024-10-07 09:52:39.060203] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.549 [2024-10-07 09:52:39.060209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.549 [2024-10-07 09:52:39.060224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.549 qpair failed and we were unable to recover it.
00:31:39.549 [2024-10-07 09:52:39.070240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.549 [2024-10-07 09:52:39.070294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.549 [2024-10-07 09:52:39.070308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.549 [2024-10-07 09:52:39.070315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.549 [2024-10-07 09:52:39.070322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.549 [2024-10-07 09:52:39.070335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.549 qpair failed and we were unable to recover it.
00:31:39.549 [2024-10-07 09:52:39.080242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.549 [2024-10-07 09:52:39.080302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.549 [2024-10-07 09:52:39.080315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.549 [2024-10-07 09:52:39.080323] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.549 [2024-10-07 09:52:39.080329] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.549 [2024-10-07 09:52:39.080343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.549 qpair failed and we were unable to recover it.
00:31:39.549 [2024-10-07 09:52:39.090249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.549 [2024-10-07 09:52:39.090295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.549 [2024-10-07 09:52:39.090308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.549 [2024-10-07 09:52:39.090315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.549 [2024-10-07 09:52:39.090322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.549 [2024-10-07 09:52:39.090339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.549 qpair failed and we were unable to recover it.
00:31:39.549 [2024-10-07 09:52:39.100321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.549 [2024-10-07 09:52:39.100425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.549 [2024-10-07 09:52:39.100439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.549 [2024-10-07 09:52:39.100446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.549 [2024-10-07 09:52:39.100453] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.549 [2024-10-07 09:52:39.100467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.549 qpair failed and we were unable to recover it.
00:31:39.549 [2024-10-07 09:52:39.110349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.549 [2024-10-07 09:52:39.110400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.549 [2024-10-07 09:52:39.110414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.549 [2024-10-07 09:52:39.110421] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.549 [2024-10-07 09:52:39.110427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.549 [2024-10-07 09:52:39.110441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.549 qpair failed and we were unable to recover it.
00:31:39.549 [2024-10-07 09:52:39.120380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.549 [2024-10-07 09:52:39.120430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.549 [2024-10-07 09:52:39.120444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.549 [2024-10-07 09:52:39.120451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.549 [2024-10-07 09:52:39.120458] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.549 [2024-10-07 09:52:39.120472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.549 qpair failed and we were unable to recover it.
00:31:39.549 [2024-10-07 09:52:39.130354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.549 [2024-10-07 09:52:39.130441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.549 [2024-10-07 09:52:39.130454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.549 [2024-10-07 09:52:39.130461] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.549 [2024-10-07 09:52:39.130468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.549 [2024-10-07 09:52:39.130482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.549 qpair failed and we were unable to recover it.
00:31:39.549 [2024-10-07 09:52:39.140415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.549 [2024-10-07 09:52:39.140469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.549 [2024-10-07 09:52:39.140487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.549 [2024-10-07 09:52:39.140494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.549 [2024-10-07 09:52:39.140500] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.549 [2024-10-07 09:52:39.140514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.549 qpair failed and we were unable to recover it.
00:31:39.549 [2024-10-07 09:52:39.150447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.549 [2024-10-07 09:52:39.150504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.549 [2024-10-07 09:52:39.150517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.549 [2024-10-07 09:52:39.150525] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.549 [2024-10-07 09:52:39.150531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.549 [2024-10-07 09:52:39.150545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.549 qpair failed and we were unable to recover it.
00:31:39.549 [2024-10-07 09:52:39.160478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.550 [2024-10-07 09:52:39.160535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.550 [2024-10-07 09:52:39.160549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.550 [2024-10-07 09:52:39.160557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.550 [2024-10-07 09:52:39.160563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.550 [2024-10-07 09:52:39.160577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.550 qpair failed and we were unable to recover it.
00:31:39.550 [2024-10-07 09:52:39.170472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.550 [2024-10-07 09:52:39.170542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.550 [2024-10-07 09:52:39.170556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.550 [2024-10-07 09:52:39.170563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.550 [2024-10-07 09:52:39.170569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.550 [2024-10-07 09:52:39.170584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.550 qpair failed and we were unable to recover it.
00:31:39.550 [2024-10-07 09:52:39.180410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.550 [2024-10-07 09:52:39.180479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.550 [2024-10-07 09:52:39.180493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.550 [2024-10-07 09:52:39.180500] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.550 [2024-10-07 09:52:39.180506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.550 [2024-10-07 09:52:39.180524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.550 qpair failed and we were unable to recover it.
00:31:39.550 [2024-10-07 09:52:39.190568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.550 [2024-10-07 09:52:39.190629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.550 [2024-10-07 09:52:39.190643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.550 [2024-10-07 09:52:39.190650] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.550 [2024-10-07 09:52:39.190656] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.550 [2024-10-07 09:52:39.190671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.550 qpair failed and we were unable to recover it.
00:31:39.550 [2024-10-07 09:52:39.200619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.550 [2024-10-07 09:52:39.200703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.550 [2024-10-07 09:52:39.200716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.550 [2024-10-07 09:52:39.200723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.550 [2024-10-07 09:52:39.200730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.550 [2024-10-07 09:52:39.200744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.550 qpair failed and we were unable to recover it.
00:31:39.812 [2024-10-07 09:52:39.210449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.812 [2024-10-07 09:52:39.210523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.812 [2024-10-07 09:52:39.210536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.812 [2024-10-07 09:52:39.210543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.812 [2024-10-07 09:52:39.210549] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.812 [2024-10-07 09:52:39.210563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.812 qpair failed and we were unable to recover it.
00:31:39.812 [2024-10-07 09:52:39.220662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.812 [2024-10-07 09:52:39.220715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.812 [2024-10-07 09:52:39.220730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.812 [2024-10-07 09:52:39.220737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.812 [2024-10-07 09:52:39.220744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.812 [2024-10-07 09:52:39.220758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.812 qpair failed and we were unable to recover it.
00:31:39.812 [2024-10-07 09:52:39.230700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.812 [2024-10-07 09:52:39.230768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.812 [2024-10-07 09:52:39.230782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.812 [2024-10-07 09:52:39.230789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.812 [2024-10-07 09:52:39.230796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.812 [2024-10-07 09:52:39.230810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.812 qpair failed and we were unable to recover it.
00:31:39.812 [2024-10-07 09:52:39.240751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.812 [2024-10-07 09:52:39.240806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.813 [2024-10-07 09:52:39.240819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.813 [2024-10-07 09:52:39.240826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.813 [2024-10-07 09:52:39.240832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.813 [2024-10-07 09:52:39.240847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.813 qpair failed and we were unable to recover it.
00:31:39.813 [2024-10-07 09:52:39.250661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.813 [2024-10-07 09:52:39.250702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.813 [2024-10-07 09:52:39.250715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.813 [2024-10-07 09:52:39.250722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.813 [2024-10-07 09:52:39.250729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.813 [2024-10-07 09:52:39.250743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.813 qpair failed and we were unable to recover it.
00:31:39.813 [2024-10-07 09:52:39.260651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.813 [2024-10-07 09:52:39.260744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.813 [2024-10-07 09:52:39.260758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.813 [2024-10-07 09:52:39.260765] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.813 [2024-10-07 09:52:39.260772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.813 [2024-10-07 09:52:39.260786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.813 qpair failed and we were unable to recover it.
00:31:39.813 [2024-10-07 09:52:39.270791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.813 [2024-10-07 09:52:39.270860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.813 [2024-10-07 09:52:39.270874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.813 [2024-10-07 09:52:39.270881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.813 [2024-10-07 09:52:39.270891] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.813 [2024-10-07 09:52:39.270906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.813 qpair failed and we were unable to recover it.
00:31:39.813 [2024-10-07 09:52:39.280834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.813 [2024-10-07 09:52:39.280888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.813 [2024-10-07 09:52:39.280901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.813 [2024-10-07 09:52:39.280908] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.813 [2024-10-07 09:52:39.280915] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.813 [2024-10-07 09:52:39.280929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.813 qpair failed and we were unable to recover it.
00:31:39.813 [2024-10-07 09:52:39.290820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.813 [2024-10-07 09:52:39.290868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.813 [2024-10-07 09:52:39.290882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.813 [2024-10-07 09:52:39.290889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.813 [2024-10-07 09:52:39.290896] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.813 [2024-10-07 09:52:39.290910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.813 qpair failed and we were unable to recover it.
00:31:39.813 [2024-10-07 09:52:39.300893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.813 [2024-10-07 09:52:39.300978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.813 [2024-10-07 09:52:39.300992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.813 [2024-10-07 09:52:39.300999] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.813 [2024-10-07 09:52:39.301005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.813 [2024-10-07 09:52:39.301019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.813 qpair failed and we were unable to recover it.
00:31:39.813 [2024-10-07 09:52:39.310910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.813 [2024-10-07 09:52:39.310964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.813 [2024-10-07 09:52:39.310978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.813 [2024-10-07 09:52:39.310985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.813 [2024-10-07 09:52:39.310991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.813 [2024-10-07 09:52:39.311006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.813 qpair failed and we were unable to recover it.
00:31:39.813 [2024-10-07 09:52:39.320914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.813 [2024-10-07 09:52:39.320993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.813 [2024-10-07 09:52:39.321007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.813 [2024-10-07 09:52:39.321014] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.813 [2024-10-07 09:52:39.321021] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.813 [2024-10-07 09:52:39.321035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.813 qpair failed and we were unable to recover it.
00:31:39.813 [2024-10-07 09:52:39.330895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.813 [2024-10-07 09:52:39.330941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.813 [2024-10-07 09:52:39.330954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.813 [2024-10-07 09:52:39.330961] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.813 [2024-10-07 09:52:39.330967] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.813 [2024-10-07 09:52:39.330981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.813 qpair failed and we were unable to recover it.
00:31:39.813 [2024-10-07 09:52:39.340989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.813 [2024-10-07 09:52:39.341040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.813 [2024-10-07 09:52:39.341054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.813 [2024-10-07 09:52:39.341061] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.813 [2024-10-07 09:52:39.341067] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.813 [2024-10-07 09:52:39.341082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.813 qpair failed and we were unable to recover it.
00:31:39.813 [2024-10-07 09:52:39.351012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.813 [2024-10-07 09:52:39.351071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.813 [2024-10-07 09:52:39.351086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.813 [2024-10-07 09:52:39.351097] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.813 [2024-10-07 09:52:39.351103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.813 [2024-10-07 09:52:39.351118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.813 qpair failed and we were unable to recover it.
00:31:39.813 [2024-10-07 09:52:39.361005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.813 [2024-10-07 09:52:39.361050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.813 [2024-10-07 09:52:39.361063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.813 [2024-10-07 09:52:39.361074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.813 [2024-10-07 09:52:39.361080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.813 [2024-10-07 09:52:39.361095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.813 qpair failed and we were unable to recover it.
00:31:39.813 [2024-10-07 09:52:39.371020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.813 [2024-10-07 09:52:39.371065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.813 [2024-10-07 09:52:39.371078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.813 [2024-10-07 09:52:39.371086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.813 [2024-10-07 09:52:39.371092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.813 [2024-10-07 09:52:39.371106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.813 qpair failed and we were unable to recover it.
00:31:39.814 [2024-10-07 09:52:39.381058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.814 [2024-10-07 09:52:39.381118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.814 [2024-10-07 09:52:39.381131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.814 [2024-10-07 09:52:39.381138] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.814 [2024-10-07 09:52:39.381144] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.814 [2024-10-07 09:52:39.381158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.814 qpair failed and we were unable to recover it.
00:31:39.814 [2024-10-07 09:52:39.391106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.814 [2024-10-07 09:52:39.391158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.814 [2024-10-07 09:52:39.391172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.814 [2024-10-07 09:52:39.391179] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.814 [2024-10-07 09:52:39.391185] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.814 [2024-10-07 09:52:39.391200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.814 qpair failed and we were unable to recover it.
00:31:39.814 [2024-10-07 09:52:39.401077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.814 [2024-10-07 09:52:39.401129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.814 [2024-10-07 09:52:39.401142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.814 [2024-10-07 09:52:39.401149] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.814 [2024-10-07 09:52:39.401155] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.814 [2024-10-07 09:52:39.401169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.814 qpair failed and we were unable to recover it.
00:31:39.814 [2024-10-07 09:52:39.411126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.814 [2024-10-07 09:52:39.411215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.814 [2024-10-07 09:52:39.411229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.814 [2024-10-07 09:52:39.411236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.814 [2024-10-07 09:52:39.411243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.814 [2024-10-07 09:52:39.411257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.814 qpair failed and we were unable to recover it.
00:31:39.814 [2024-10-07 09:52:39.421109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.814 [2024-10-07 09:52:39.421154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.814 [2024-10-07 09:52:39.421167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.814 [2024-10-07 09:52:39.421174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.814 [2024-10-07 09:52:39.421181] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.814 [2024-10-07 09:52:39.421195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.814 qpair failed and we were unable to recover it.
00:31:39.814 [2024-10-07 09:52:39.431119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.814 [2024-10-07 09:52:39.431175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.814 [2024-10-07 09:52:39.431188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.814 [2024-10-07 09:52:39.431196] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.814 [2024-10-07 09:52:39.431202] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.814 [2024-10-07 09:52:39.431216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.814 qpair failed and we were unable to recover it.
00:31:39.814 [2024-10-07 09:52:39.441238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.814 [2024-10-07 09:52:39.441292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.814 [2024-10-07 09:52:39.441305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.814 [2024-10-07 09:52:39.441312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.814 [2024-10-07 09:52:39.441319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.814 [2024-10-07 09:52:39.441333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.814 qpair failed and we were unable to recover it.
00:31:39.814 [2024-10-07 09:52:39.451279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.814 [2024-10-07 09:52:39.451328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.814 [2024-10-07 09:52:39.451341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.814 [2024-10-07 09:52:39.451352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.814 [2024-10-07 09:52:39.451358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.814 [2024-10-07 09:52:39.451372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.814 qpair failed and we were unable to recover it.
00:31:39.814 [2024-10-07 09:52:39.461261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.814 [2024-10-07 09:52:39.461310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.814 [2024-10-07 09:52:39.461323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.814 [2024-10-07 09:52:39.461330] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.814 [2024-10-07 09:52:39.461337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.814 [2024-10-07 09:52:39.461351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.814 qpair failed and we were unable to recover it.
00:31:39.814 [2024-10-07 09:52:39.471301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.814 [2024-10-07 09:52:39.471360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.814 [2024-10-07 09:52:39.471373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.814 [2024-10-07 09:52:39.471380] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.814 [2024-10-07 09:52:39.471387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:39.814 [2024-10-07 09:52:39.471401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:39.814 qpair failed and we were unable to recover it.
00:31:40.076 [2024-10-07 09:52:39.481293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.077 [2024-10-07 09:52:39.481346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.077 [2024-10-07 09:52:39.481359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.077 [2024-10-07 09:52:39.481366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.077 [2024-10-07 09:52:39.481372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.077 [2024-10-07 09:52:39.481387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.077 qpair failed and we were unable to recover it.
00:31:40.077 [2024-10-07 09:52:39.491224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.077 [2024-10-07 09:52:39.491276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.077 [2024-10-07 09:52:39.491290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.077 [2024-10-07 09:52:39.491297] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.077 [2024-10-07 09:52:39.491303] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.077 [2024-10-07 09:52:39.491317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.077 qpair failed and we were unable to recover it.
00:31:40.077 [2024-10-07 09:52:39.501360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.077 [2024-10-07 09:52:39.501423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.077 [2024-10-07 09:52:39.501448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.077 [2024-10-07 09:52:39.501456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.077 [2024-10-07 09:52:39.501464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.077 [2024-10-07 09:52:39.501483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.077 qpair failed and we were unable to recover it.
00:31:40.077 [2024-10-07 09:52:39.511458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.077 [2024-10-07 09:52:39.511511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.077 [2024-10-07 09:52:39.511527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.077 [2024-10-07 09:52:39.511534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.077 [2024-10-07 09:52:39.511541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.077 [2024-10-07 09:52:39.511556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.077 qpair failed and we were unable to recover it.
00:31:40.077 [2024-10-07 09:52:39.521544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.077 [2024-10-07 09:52:39.521604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.077 [2024-10-07 09:52:39.521621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.077 [2024-10-07 09:52:39.521629] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.077 [2024-10-07 09:52:39.521635] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.077 [2024-10-07 09:52:39.521650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.077 qpair failed and we were unable to recover it.
00:31:40.077 [2024-10-07 09:52:39.531489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.077 [2024-10-07 09:52:39.531538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.077 [2024-10-07 09:52:39.531551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.077 [2024-10-07 09:52:39.531558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.077 [2024-10-07 09:52:39.531565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.077 [2024-10-07 09:52:39.531579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.077 qpair failed and we were unable to recover it.
00:31:40.077 [2024-10-07 09:52:39.541523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.077 [2024-10-07 09:52:39.541596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.077 [2024-10-07 09:52:39.541613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.077 [2024-10-07 09:52:39.541638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.077 [2024-10-07 09:52:39.541645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.077 [2024-10-07 09:52:39.541661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.077 qpair failed and we were unable to recover it.
00:31:40.077 [2024-10-07 09:52:39.551608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.077 [2024-10-07 09:52:39.551688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.077 [2024-10-07 09:52:39.551702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.077 [2024-10-07 09:52:39.551709] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.077 [2024-10-07 09:52:39.551715] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.077 [2024-10-07 09:52:39.551730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.077 qpair failed and we were unable to recover it.
00:31:40.077 [2024-10-07 09:52:39.561550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.077 [2024-10-07 09:52:39.561600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.077 [2024-10-07 09:52:39.561613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.077 [2024-10-07 09:52:39.561624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.077 [2024-10-07 09:52:39.561631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.077 [2024-10-07 09:52:39.561646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.077 qpair failed and we were unable to recover it.
00:31:40.077 [2024-10-07 09:52:39.571575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.077 [2024-10-07 09:52:39.571624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.077 [2024-10-07 09:52:39.571638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.077 [2024-10-07 09:52:39.571646] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.077 [2024-10-07 09:52:39.571653] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.077 [2024-10-07 09:52:39.571668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.077 qpair failed and we were unable to recover it.
00:31:40.077 [2024-10-07 09:52:39.581564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.077 [2024-10-07 09:52:39.581611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.077 [2024-10-07 09:52:39.581637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.077 [2024-10-07 09:52:39.581645] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.077 [2024-10-07 09:52:39.581651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.077 [2024-10-07 09:52:39.581669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.077 qpair failed and we were unable to recover it.
00:31:40.077 [2024-10-07 09:52:39.591659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.077 [2024-10-07 09:52:39.591714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.077 [2024-10-07 09:52:39.591729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.077 [2024-10-07 09:52:39.591736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.077 [2024-10-07 09:52:39.591742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.077 [2024-10-07 09:52:39.591758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.077 qpair failed and we were unable to recover it.
00:31:40.077 [2024-10-07 09:52:39.601657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.077 [2024-10-07 09:52:39.601715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.077 [2024-10-07 09:52:39.601729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.077 [2024-10-07 09:52:39.601736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.077 [2024-10-07 09:52:39.601742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.077 [2024-10-07 09:52:39.601757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.077 qpair failed and we were unable to recover it.
00:31:40.078 [2024-10-07 09:52:39.611550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.078 [2024-10-07 09:52:39.611608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.078 [2024-10-07 09:52:39.611626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.078 [2024-10-07 09:52:39.611633] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.078 [2024-10-07 09:52:39.611639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.078 [2024-10-07 09:52:39.611654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.078 qpair failed and we were unable to recover it.
00:31:40.078 [2024-10-07 09:52:39.621712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.078 [2024-10-07 09:52:39.621758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.078 [2024-10-07 09:52:39.621771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.078 [2024-10-07 09:52:39.621778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.078 [2024-10-07 09:52:39.621785] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.078 [2024-10-07 09:52:39.621799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.078 qpair failed and we were unable to recover it.
00:31:40.078 [2024-10-07 09:52:39.631770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.078 [2024-10-07 09:52:39.631825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.078 [2024-10-07 09:52:39.631842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.078 [2024-10-07 09:52:39.631849] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.078 [2024-10-07 09:52:39.631855] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.078 [2024-10-07 09:52:39.631869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.078 qpair failed and we were unable to recover it.
00:31:40.078 [2024-10-07 09:52:39.641736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.078 [2024-10-07 09:52:39.641785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.078 [2024-10-07 09:52:39.641799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.078 [2024-10-07 09:52:39.641806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.078 [2024-10-07 09:52:39.641813] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.078 [2024-10-07 09:52:39.641827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.078 qpair failed and we were unable to recover it.
00:31:40.078 [2024-10-07 09:52:39.651663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.078 [2024-10-07 09:52:39.651708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.078 [2024-10-07 09:52:39.651721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.078 [2024-10-07 09:52:39.651728] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.078 [2024-10-07 09:52:39.651734] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.078 [2024-10-07 09:52:39.651748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.078 qpair failed and we were unable to recover it.
00:31:40.078 [2024-10-07 09:52:39.661812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.078 [2024-10-07 09:52:39.661865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.078 [2024-10-07 09:52:39.661878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.078 [2024-10-07 09:52:39.661885] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.078 [2024-10-07 09:52:39.661892] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.078 [2024-10-07 09:52:39.661906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.078 qpair failed and we were unable to recover it.
00:31:40.078 [2024-10-07 09:52:39.671870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.078 [2024-10-07 09:52:39.671923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.078 [2024-10-07 09:52:39.671936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.078 [2024-10-07 09:52:39.671944] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.078 [2024-10-07 09:52:39.671950] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.078 [2024-10-07 09:52:39.671968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.078 qpair failed and we were unable to recover it.
00:31:40.078 [2024-10-07 09:52:39.681761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.078 [2024-10-07 09:52:39.681813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.078 [2024-10-07 09:52:39.681827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.078 [2024-10-07 09:52:39.681834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.078 [2024-10-07 09:52:39.681840] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.078 [2024-10-07 09:52:39.681854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.078 qpair failed and we were unable to recover it.
00:31:40.078 [2024-10-07 09:52:39.691878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.078 [2024-10-07 09:52:39.691932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.078 [2024-10-07 09:52:39.691945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.078 [2024-10-07 09:52:39.691953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.078 [2024-10-07 09:52:39.691959] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.078 [2024-10-07 09:52:39.691973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.078 qpair failed and we were unable to recover it.
00:31:40.078 [2024-10-07 09:52:39.701895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.078 [2024-10-07 09:52:39.701943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.078 [2024-10-07 09:52:39.701956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.078 [2024-10-07 09:52:39.701964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.078 [2024-10-07 09:52:39.701970] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.078 [2024-10-07 09:52:39.701984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.078 qpair failed and we were unable to recover it.
00:31:40.078 [2024-10-07 09:52:39.712004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.078 [2024-10-07 09:52:39.712056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.078 [2024-10-07 09:52:39.712070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.078 [2024-10-07 09:52:39.712077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.078 [2024-10-07 09:52:39.712084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.078 [2024-10-07 09:52:39.712098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.078 qpair failed and we were unable to recover it.
00:31:40.078 [2024-10-07 09:52:39.722005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.078 [2024-10-07 09:52:39.722105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.078 [2024-10-07 09:52:39.722125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.078 [2024-10-07 09:52:39.722132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.078 [2024-10-07 09:52:39.722139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.078 [2024-10-07 09:52:39.722153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.078 qpair failed and we were unable to recover it.
00:31:40.078 [2024-10-07 09:52:39.731874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.078 [2024-10-07 09:52:39.731932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.078 [2024-10-07 09:52:39.731946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.078 [2024-10-07 09:52:39.731953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.078 [2024-10-07 09:52:39.731960] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.078 [2024-10-07 09:52:39.731974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.078 qpair failed and we were unable to recover it.
00:31:40.342 [2024-10-07 09:52:39.742049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.342 [2024-10-07 09:52:39.742094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.342 [2024-10-07 09:52:39.742108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.342 [2024-10-07 09:52:39.742115] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.342 [2024-10-07 09:52:39.742122] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.342 [2024-10-07 09:52:39.742135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.342 qpair failed and we were unable to recover it.
00:31:40.342 [2024-10-07 09:52:39.752109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.342 [2024-10-07 09:52:39.752163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.343 [2024-10-07 09:52:39.752176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.343 [2024-10-07 09:52:39.752183] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.343 [2024-10-07 09:52:39.752190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.343 [2024-10-07 09:52:39.752203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.343 qpair failed and we were unable to recover it.
00:31:40.343 [2024-10-07 09:52:39.762138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.343 [2024-10-07 09:52:39.762191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.343 [2024-10-07 09:52:39.762204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.343 [2024-10-07 09:52:39.762211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.343 [2024-10-07 09:52:39.762221] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.343 [2024-10-07 09:52:39.762235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.343 qpair failed and we were unable to recover it.
00:31:40.343 [2024-10-07 09:52:39.772132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.343 [2024-10-07 09:52:39.772210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.343 [2024-10-07 09:52:39.772223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.343 [2024-10-07 09:52:39.772230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.343 [2024-10-07 09:52:39.772236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.343 [2024-10-07 09:52:39.772250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.343 qpair failed and we were unable to recover it.
00:31:40.343 [2024-10-07 09:52:39.782129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.343 [2024-10-07 09:52:39.782178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.343 [2024-10-07 09:52:39.782191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.343 [2024-10-07 09:52:39.782198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.343 [2024-10-07 09:52:39.782205] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.343 [2024-10-07 09:52:39.782219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.343 qpair failed and we were unable to recover it.
00:31:40.343 [2024-10-07 09:52:39.792175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.343 [2024-10-07 09:52:39.792229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.343 [2024-10-07 09:52:39.792241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.343 [2024-10-07 09:52:39.792248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.343 [2024-10-07 09:52:39.792255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.343 [2024-10-07 09:52:39.792269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.343 qpair failed and we were unable to recover it.
00:31:40.343 [2024-10-07 09:52:39.802210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.343 [2024-10-07 09:52:39.802261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.343 [2024-10-07 09:52:39.802273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.343 [2024-10-07 09:52:39.802281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.343 [2024-10-07 09:52:39.802287] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.343 [2024-10-07 09:52:39.802301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.343 qpair failed and we were unable to recover it.
00:31:40.343 [2024-10-07 09:52:39.812220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.343 [2024-10-07 09:52:39.812339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.343 [2024-10-07 09:52:39.812353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.343 [2024-10-07 09:52:39.812360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.343 [2024-10-07 09:52:39.812367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.343 [2024-10-07 09:52:39.812381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.343 qpair failed and we were unable to recover it.
00:31:40.343 [2024-10-07 09:52:39.822253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.343 [2024-10-07 09:52:39.822302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.343 [2024-10-07 09:52:39.822315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.343 [2024-10-07 09:52:39.822322] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.343 [2024-10-07 09:52:39.822329] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.343 [2024-10-07 09:52:39.822343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.343 qpair failed and we were unable to recover it.
00:31:40.343 [2024-10-07 09:52:39.832329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.343 [2024-10-07 09:52:39.832416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.343 [2024-10-07 09:52:39.832429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.343 [2024-10-07 09:52:39.832436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.343 [2024-10-07 09:52:39.832442] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.343 [2024-10-07 09:52:39.832456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.343 qpair failed and we were unable to recover it.
00:31:40.343 [2024-10-07 09:52:39.842323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.343 [2024-10-07 09:52:39.842384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.343 [2024-10-07 09:52:39.842397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.343 [2024-10-07 09:52:39.842404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.343 [2024-10-07 09:52:39.842410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.343 [2024-10-07 09:52:39.842424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.343 qpair failed and we were unable to recover it.
00:31:40.343 [2024-10-07 09:52:39.852213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.343 [2024-10-07 09:52:39.852260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.343 [2024-10-07 09:52:39.852275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.343 [2024-10-07 09:52:39.852282] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.343 [2024-10-07 09:52:39.852292] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.343 [2024-10-07 09:52:39.852307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.343 qpair failed and we were unable to recover it.
00:31:40.343 [2024-10-07 09:52:39.862340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.343 [2024-10-07 09:52:39.862385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.343 [2024-10-07 09:52:39.862399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.343 [2024-10-07 09:52:39.862407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.343 [2024-10-07 09:52:39.862413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.343 [2024-10-07 09:52:39.862427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.343 qpair failed and we were unable to recover it.
00:31:40.343 [2024-10-07 09:52:39.872448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.343 [2024-10-07 09:52:39.872505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.343 [2024-10-07 09:52:39.872519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.343 [2024-10-07 09:52:39.872526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.343 [2024-10-07 09:52:39.872532] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.343 [2024-10-07 09:52:39.872547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.343 qpair failed and we were unable to recover it.
00:31:40.343 [2024-10-07 09:52:39.882391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.343 [2024-10-07 09:52:39.882481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.343 [2024-10-07 09:52:39.882494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.343 [2024-10-07 09:52:39.882502] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.343 [2024-10-07 09:52:39.882508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.343 [2024-10-07 09:52:39.882522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.344 qpair failed and we were unable to recover it.
00:31:40.344 [2024-10-07 09:52:39.892425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.344 [2024-10-07 09:52:39.892466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.344 [2024-10-07 09:52:39.892479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.344 [2024-10-07 09:52:39.892486] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.344 [2024-10-07 09:52:39.892492] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.344 [2024-10-07 09:52:39.892506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.344 qpair failed and we were unable to recover it.
00:31:40.344 [2024-10-07 09:52:39.902453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.344 [2024-10-07 09:52:39.902498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.344 [2024-10-07 09:52:39.902512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.344 [2024-10-07 09:52:39.902519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.344 [2024-10-07 09:52:39.902525] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.344 [2024-10-07 09:52:39.902539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.344 qpair failed and we were unable to recover it.
00:31:40.344 [2024-10-07 09:52:39.912556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.344 [2024-10-07 09:52:39.912621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.344 [2024-10-07 09:52:39.912635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.344 [2024-10-07 09:52:39.912642] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.344 [2024-10-07 09:52:39.912649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.344 [2024-10-07 09:52:39.912663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.344 qpair failed and we were unable to recover it.
00:31:40.344 [2024-10-07 09:52:39.922514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.344 [2024-10-07 09:52:39.922563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.344 [2024-10-07 09:52:39.922576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.344 [2024-10-07 09:52:39.922583] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.344 [2024-10-07 09:52:39.922590] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.344 [2024-10-07 09:52:39.922603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.344 qpair failed and we were unable to recover it.
00:31:40.344 [2024-10-07 09:52:39.932530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.344 [2024-10-07 09:52:39.932574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.344 [2024-10-07 09:52:39.932587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.344 [2024-10-07 09:52:39.932594] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.344 [2024-10-07 09:52:39.932600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.344 [2024-10-07 09:52:39.932614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.344 qpair failed and we were unable to recover it.
00:31:40.344 [2024-10-07 09:52:39.942575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.344 [2024-10-07 09:52:39.942668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.344 [2024-10-07 09:52:39.942681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.344 [2024-10-07 09:52:39.942692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.344 [2024-10-07 09:52:39.942698] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.344 [2024-10-07 09:52:39.942712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.344 qpair failed and we were unable to recover it.
00:31:40.344 [2024-10-07 09:52:39.952691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.344 [2024-10-07 09:52:39.952747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.344 [2024-10-07 09:52:39.952760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.344 [2024-10-07 09:52:39.952767] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.344 [2024-10-07 09:52:39.952774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.344 [2024-10-07 09:52:39.952788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.344 qpair failed and we were unable to recover it.
00:31:40.344 [2024-10-07 09:52:39.962654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.344 [2024-10-07 09:52:39.962705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.344 [2024-10-07 09:52:39.962719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.344 [2024-10-07 09:52:39.962726] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.344 [2024-10-07 09:52:39.962733] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.344 [2024-10-07 09:52:39.962747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.344 qpair failed and we were unable to recover it.
00:31:40.344 [2024-10-07 09:52:39.972698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.344 [2024-10-07 09:52:39.972790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.344 [2024-10-07 09:52:39.972803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.344 [2024-10-07 09:52:39.972810] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.344 [2024-10-07 09:52:39.972817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.344 [2024-10-07 09:52:39.972831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.344 qpair failed and we were unable to recover it.
00:31:40.344 [2024-10-07 09:52:39.982727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.344 [2024-10-07 09:52:39.982801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.344 [2024-10-07 09:52:39.982814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.344 [2024-10-07 09:52:39.982821] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.344 [2024-10-07 09:52:39.982828] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.344 [2024-10-07 09:52:39.982842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.344 qpair failed and we were unable to recover it.
00:31:40.344 [2024-10-07 09:52:39.992756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.344 [2024-10-07 09:52:39.992812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.344 [2024-10-07 09:52:39.992826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.344 [2024-10-07 09:52:39.992833] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.344 [2024-10-07 09:52:39.992839] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.344 [2024-10-07 09:52:39.992853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.344 qpair failed and we were unable to recover it.
00:31:40.344 [2024-10-07 09:52:40.002832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.344 [2024-10-07 09:52:40.002895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.344 [2024-10-07 09:52:40.002910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.344 [2024-10-07 09:52:40.002917] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.344 [2024-10-07 09:52:40.002923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.344 [2024-10-07 09:52:40.002938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.608 qpair failed and we were unable to recover it.
00:31:40.608 [2024-10-07 09:52:40.012868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.608 [2024-10-07 09:52:40.012923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.608 [2024-10-07 09:52:40.012937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.608 [2024-10-07 09:52:40.012944] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.608 [2024-10-07 09:52:40.012950] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.608 [2024-10-07 09:52:40.012965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.608 qpair failed and we were unable to recover it.
00:31:40.608 [2024-10-07 09:52:40.022864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.608 [2024-10-07 09:52:40.022929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.608 [2024-10-07 09:52:40.022943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.608 [2024-10-07 09:52:40.022950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.608 [2024-10-07 09:52:40.022957] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.608 [2024-10-07 09:52:40.022971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.608 qpair failed and we were unable to recover it.
00:31:40.608 [2024-10-07 09:52:40.032912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.608 [2024-10-07 09:52:40.032971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.608 [2024-10-07 09:52:40.032988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.608 [2024-10-07 09:52:40.032996] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.608 [2024-10-07 09:52:40.033003] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.608 [2024-10-07 09:52:40.033017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.608 qpair failed and we were unable to recover it.
00:31:40.608 [2024-10-07 09:52:40.042917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.608 [2024-10-07 09:52:40.042970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.608 [2024-10-07 09:52:40.042983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.608 [2024-10-07 09:52:40.042990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.608 [2024-10-07 09:52:40.042997] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.608 [2024-10-07 09:52:40.043010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.608 qpair failed and we were unable to recover it.
00:31:40.608 [2024-10-07 09:52:40.052805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.608 [2024-10-07 09:52:40.052858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.608 [2024-10-07 09:52:40.052871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.608 [2024-10-07 09:52:40.052878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.608 [2024-10-07 09:52:40.052884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.608 [2024-10-07 09:52:40.052898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.608 qpair failed and we were unable to recover it.
00:31:40.608 [2024-10-07 09:52:40.062783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.608 [2024-10-07 09:52:40.062833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.608 [2024-10-07 09:52:40.062846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.608 [2024-10-07 09:52:40.062853] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.608 [2024-10-07 09:52:40.062860] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.608 [2024-10-07 09:52:40.062874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.608 qpair failed and we were unable to recover it.
00:31:40.608 [2024-10-07 09:52:40.072987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.608 [2024-10-07 09:52:40.073046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.608 [2024-10-07 09:52:40.073060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.608 [2024-10-07 09:52:40.073067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.608 [2024-10-07 09:52:40.073073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.608 [2024-10-07 09:52:40.073088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.608 qpair failed and we were unable to recover it.
00:31:40.608 [2024-10-07 09:52:40.082968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.608 [2024-10-07 09:52:40.083019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.608 [2024-10-07 09:52:40.083033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.608 [2024-10-07 09:52:40.083040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.608 [2024-10-07 09:52:40.083046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.608 [2024-10-07 09:52:40.083061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.608 qpair failed and we were unable to recover it.
00:31:40.608 [2024-10-07 09:52:40.093016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.608 [2024-10-07 09:52:40.093065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.608 [2024-10-07 09:52:40.093079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.608 [2024-10-07 09:52:40.093086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.608 [2024-10-07 09:52:40.093093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.608 [2024-10-07 09:52:40.093107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.608 qpair failed and we were unable to recover it.
00:31:40.608 [2024-10-07 09:52:40.102874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.608 [2024-10-07 09:52:40.102918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.608 [2024-10-07 09:52:40.102931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.608 [2024-10-07 09:52:40.102939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.608 [2024-10-07 09:52:40.102945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.608 [2024-10-07 09:52:40.102959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.608 qpair failed and we were unable to recover it.
00:31:40.608 [2024-10-07 09:52:40.113101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.608 [2024-10-07 09:52:40.113169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.608 [2024-10-07 09:52:40.113183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.608 [2024-10-07 09:52:40.113190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.608 [2024-10-07 09:52:40.113197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.608 [2024-10-07 09:52:40.113211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.608 qpair failed and we were unable to recover it.
00:31:40.608 [2024-10-07 09:52:40.123095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.608 [2024-10-07 09:52:40.123147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.608 [2024-10-07 09:52:40.123164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.608 [2024-10-07 09:52:40.123172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.608 [2024-10-07 09:52:40.123178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.608 [2024-10-07 09:52:40.123193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.608 qpair failed and we were unable to recover it.
00:31:40.608 [2024-10-07 09:52:40.133182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.608 [2024-10-07 09:52:40.133245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.609 [2024-10-07 09:52:40.133258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.609 [2024-10-07 09:52:40.133265] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.609 [2024-10-07 09:52:40.133271] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.609 [2024-10-07 09:52:40.133286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.609 qpair failed and we were unable to recover it.
00:31:40.609 [2024-10-07 09:52:40.143187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.609 [2024-10-07 09:52:40.143250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.609 [2024-10-07 09:52:40.143264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.609 [2024-10-07 09:52:40.143271] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.609 [2024-10-07 09:52:40.143277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.609 [2024-10-07 09:52:40.143291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.609 qpair failed and we were unable to recover it.
00:31:40.609 [2024-10-07 09:52:40.153222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.609 [2024-10-07 09:52:40.153277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.609 [2024-10-07 09:52:40.153291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.609 [2024-10-07 09:52:40.153298] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.609 [2024-10-07 09:52:40.153304] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.609 [2024-10-07 09:52:40.153318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.609 qpair failed and we were unable to recover it.
00:31:40.609 [2024-10-07 09:52:40.163212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.609 [2024-10-07 09:52:40.163267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.609 [2024-10-07 09:52:40.163280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.609 [2024-10-07 09:52:40.163287] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.609 [2024-10-07 09:52:40.163294] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.609 [2024-10-07 09:52:40.163311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.609 qpair failed and we were unable to recover it.
00:31:40.609 [2024-10-07 09:52:40.173226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.609 [2024-10-07 09:52:40.173276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.609 [2024-10-07 09:52:40.173290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.609 [2024-10-07 09:52:40.173297] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.609 [2024-10-07 09:52:40.173303] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.609 [2024-10-07 09:52:40.173317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.609 qpair failed and we were unable to recover it.
00:31:40.609 [2024-10-07 09:52:40.183260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.609 [2024-10-07 09:52:40.183320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.609 [2024-10-07 09:52:40.183333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.609 [2024-10-07 09:52:40.183340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.609 [2024-10-07 09:52:40.183347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.609 [2024-10-07 09:52:40.183361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.609 qpair failed and we were unable to recover it.
00:31:40.609 [2024-10-07 09:52:40.193348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.609 [2024-10-07 09:52:40.193404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.609 [2024-10-07 09:52:40.193417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.609 [2024-10-07 09:52:40.193425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.609 [2024-10-07 09:52:40.193431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.609 [2024-10-07 09:52:40.193445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.609 qpair failed and we were unable to recover it.
00:31:40.609 [2024-10-07 09:52:40.203302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.609 [2024-10-07 09:52:40.203353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.609 [2024-10-07 09:52:40.203366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.609 [2024-10-07 09:52:40.203373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.609 [2024-10-07 09:52:40.203380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.609 [2024-10-07 09:52:40.203394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.609 qpair failed and we were unable to recover it.
00:31:40.609 [2024-10-07 09:52:40.213329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.609 [2024-10-07 09:52:40.213377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.609 [2024-10-07 09:52:40.213393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.609 [2024-10-07 09:52:40.213400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.609 [2024-10-07 09:52:40.213407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.609 [2024-10-07 09:52:40.213421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.609 qpair failed and we were unable to recover it.
00:31:40.609 [2024-10-07 09:52:40.223311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.609 [2024-10-07 09:52:40.223406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.609 [2024-10-07 09:52:40.223419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.609 [2024-10-07 09:52:40.223427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.609 [2024-10-07 09:52:40.223434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.609 [2024-10-07 09:52:40.223447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.609 qpair failed and we were unable to recover it.
00:31:40.609 [2024-10-07 09:52:40.233436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.609 [2024-10-07 09:52:40.233493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.609 [2024-10-07 09:52:40.233506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.609 [2024-10-07 09:52:40.233514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.609 [2024-10-07 09:52:40.233520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.609 [2024-10-07 09:52:40.233535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.609 qpair failed and we were unable to recover it.
00:31:40.609 [2024-10-07 09:52:40.243422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.609 [2024-10-07 09:52:40.243474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.609 [2024-10-07 09:52:40.243488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.609 [2024-10-07 09:52:40.243495] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.609 [2024-10-07 09:52:40.243502] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.609 [2024-10-07 09:52:40.243516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.609 qpair failed and we were unable to recover it.
00:31:40.609 [2024-10-07 09:52:40.253431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.609 [2024-10-07 09:52:40.253481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.609 [2024-10-07 09:52:40.253494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.609 [2024-10-07 09:52:40.253501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.609 [2024-10-07 09:52:40.253511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:40.609 [2024-10-07 09:52:40.253525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:40.609 qpair failed and we were unable to recover it.
00:31:40.609 [2024-10-07 09:52:40.263455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.609 [2024-10-07 09:52:40.263506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.609 [2024-10-07 09:52:40.263519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.609 [2024-10-07 09:52:40.263526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.609 [2024-10-07 09:52:40.263533] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.610 [2024-10-07 09:52:40.263547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.610 qpair failed and we were unable to recover it. 00:31:40.872 [2024-10-07 09:52:40.273518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.872 [2024-10-07 09:52:40.273572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.872 [2024-10-07 09:52:40.273585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.872 [2024-10-07 09:52:40.273592] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.872 [2024-10-07 09:52:40.273599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.872 [2024-10-07 09:52:40.273613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.872 qpair failed and we were unable to recover it. 00:31:40.872 [2024-10-07 09:52:40.283519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.872 [2024-10-07 09:52:40.283566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.872 [2024-10-07 09:52:40.283579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.872 [2024-10-07 09:52:40.283587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.873 [2024-10-07 09:52:40.283593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.873 [2024-10-07 09:52:40.283607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.873 qpair failed and we were unable to recover it. 
00:31:40.873 [2024-10-07 09:52:40.293547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.873 [2024-10-07 09:52:40.293592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.873 [2024-10-07 09:52:40.293605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.873 [2024-10-07 09:52:40.293612] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.873 [2024-10-07 09:52:40.293623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.873 [2024-10-07 09:52:40.293637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.873 qpair failed and we were unable to recover it. 00:31:40.873 [2024-10-07 09:52:40.303580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.873 [2024-10-07 09:52:40.303633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.873 [2024-10-07 09:52:40.303647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.873 [2024-10-07 09:52:40.303654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.873 [2024-10-07 09:52:40.303660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.873 [2024-10-07 09:52:40.303675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.873 qpair failed and we were unable to recover it. 00:31:40.873 [2024-10-07 09:52:40.313646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.873 [2024-10-07 09:52:40.313700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.873 [2024-10-07 09:52:40.313714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.873 [2024-10-07 09:52:40.313721] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.873 [2024-10-07 09:52:40.313728] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.873 [2024-10-07 09:52:40.313742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.873 qpair failed and we were unable to recover it. 
00:31:40.873 [2024-10-07 09:52:40.323651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.873 [2024-10-07 09:52:40.323705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.873 [2024-10-07 09:52:40.323718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.873 [2024-10-07 09:52:40.323725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.873 [2024-10-07 09:52:40.323731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.873 [2024-10-07 09:52:40.323745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.873 qpair failed and we were unable to recover it. 00:31:40.873 [2024-10-07 09:52:40.333639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.873 [2024-10-07 09:52:40.333739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.873 [2024-10-07 09:52:40.333754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.873 [2024-10-07 09:52:40.333761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.873 [2024-10-07 09:52:40.333767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.873 [2024-10-07 09:52:40.333786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.873 qpair failed and we were unable to recover it. 00:31:40.873 [2024-10-07 09:52:40.343549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.873 [2024-10-07 09:52:40.343603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.873 [2024-10-07 09:52:40.343620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.873 [2024-10-07 09:52:40.343628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.873 [2024-10-07 09:52:40.343637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.873 [2024-10-07 09:52:40.343652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.873 qpair failed and we were unable to recover it. 
00:31:40.873 [2024-10-07 09:52:40.353774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.873 [2024-10-07 09:52:40.353840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.873 [2024-10-07 09:52:40.353853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.873 [2024-10-07 09:52:40.353860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.873 [2024-10-07 09:52:40.353866] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.873 [2024-10-07 09:52:40.353881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.873 qpair failed and we were unable to recover it. 00:31:40.873 [2024-10-07 09:52:40.363796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.873 [2024-10-07 09:52:40.363848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.873 [2024-10-07 09:52:40.363861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.873 [2024-10-07 09:52:40.363868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.873 [2024-10-07 09:52:40.363875] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.873 [2024-10-07 09:52:40.363888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.873 qpair failed and we were unable to recover it. 00:31:40.873 [2024-10-07 09:52:40.373772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.873 [2024-10-07 09:52:40.373835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.873 [2024-10-07 09:52:40.373848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.873 [2024-10-07 09:52:40.373855] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.873 [2024-10-07 09:52:40.373862] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.873 [2024-10-07 09:52:40.373876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.873 qpair failed and we were unable to recover it. 
00:31:40.873 [2024-10-07 09:52:40.383786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.873 [2024-10-07 09:52:40.383835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.873 [2024-10-07 09:52:40.383849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.873 [2024-10-07 09:52:40.383855] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.873 [2024-10-07 09:52:40.383862] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.873 [2024-10-07 09:52:40.383876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.873 qpair failed and we were unable to recover it. 00:31:40.873 [2024-10-07 09:52:40.393870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.873 [2024-10-07 09:52:40.393924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.873 [2024-10-07 09:52:40.393937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.873 [2024-10-07 09:52:40.393945] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.873 [2024-10-07 09:52:40.393951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.873 [2024-10-07 09:52:40.393965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.873 qpair failed and we were unable to recover it. 00:31:40.873 [2024-10-07 09:52:40.403862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.873 [2024-10-07 09:52:40.403909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.873 [2024-10-07 09:52:40.403923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.873 [2024-10-07 09:52:40.403929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.873 [2024-10-07 09:52:40.403936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.873 [2024-10-07 09:52:40.403949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.873 qpair failed and we were unable to recover it. 
00:31:40.873 [2024-10-07 09:52:40.413880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.873 [2024-10-07 09:52:40.413966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.873 [2024-10-07 09:52:40.413980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.873 [2024-10-07 09:52:40.413987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.873 [2024-10-07 09:52:40.413994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.873 [2024-10-07 09:52:40.414008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.874 qpair failed and we were unable to recover it. 00:31:40.874 [2024-10-07 09:52:40.423883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.874 [2024-10-07 09:52:40.423936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.874 [2024-10-07 09:52:40.423949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.874 [2024-10-07 09:52:40.423956] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.874 [2024-10-07 09:52:40.423963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.874 [2024-10-07 09:52:40.423976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.874 qpair failed and we were unable to recover it. 00:31:40.874 [2024-10-07 09:52:40.433870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.874 [2024-10-07 09:52:40.433942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.874 [2024-10-07 09:52:40.433955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.874 [2024-10-07 09:52:40.433965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.874 [2024-10-07 09:52:40.433971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.874 [2024-10-07 09:52:40.433986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.874 qpair failed and we were unable to recover it. 
00:31:40.874 [2024-10-07 09:52:40.443925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.874 [2024-10-07 09:52:40.443974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.874 [2024-10-07 09:52:40.443988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.874 [2024-10-07 09:52:40.443995] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.874 [2024-10-07 09:52:40.444001] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.874 [2024-10-07 09:52:40.444015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.874 qpair failed and we were unable to recover it. 00:31:40.874 [2024-10-07 09:52:40.453847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.874 [2024-10-07 09:52:40.453895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.874 [2024-10-07 09:52:40.453909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.874 [2024-10-07 09:52:40.453916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.874 [2024-10-07 09:52:40.453923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.874 [2024-10-07 09:52:40.453937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.874 qpair failed and we were unable to recover it. 00:31:40.874 [2024-10-07 09:52:40.464013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.874 [2024-10-07 09:52:40.464064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.874 [2024-10-07 09:52:40.464077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.874 [2024-10-07 09:52:40.464084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.874 [2024-10-07 09:52:40.464090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.874 [2024-10-07 09:52:40.464104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.874 qpair failed and we were unable to recover it. 
00:31:40.874 [2024-10-07 09:52:40.474104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.874 [2024-10-07 09:52:40.474156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.874 [2024-10-07 09:52:40.474169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.874 [2024-10-07 09:52:40.474177] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.874 [2024-10-07 09:52:40.474183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.874 [2024-10-07 09:52:40.474197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.874 qpair failed and we were unable to recover it. 00:31:40.874 [2024-10-07 09:52:40.483951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.874 [2024-10-07 09:52:40.484003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.874 [2024-10-07 09:52:40.484016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.874 [2024-10-07 09:52:40.484023] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.874 [2024-10-07 09:52:40.484029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.874 [2024-10-07 09:52:40.484043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.874 qpair failed and we were unable to recover it. 00:31:40.874 [2024-10-07 09:52:40.493958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.874 [2024-10-07 09:52:40.494006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.874 [2024-10-07 09:52:40.494021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.874 [2024-10-07 09:52:40.494028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.874 [2024-10-07 09:52:40.494035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.874 [2024-10-07 09:52:40.494049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.874 qpair failed and we were unable to recover it. 
00:31:40.874 [2024-10-07 09:52:40.504112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.874 [2024-10-07 09:52:40.504161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.874 [2024-10-07 09:52:40.504175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.874 [2024-10-07 09:52:40.504182] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.874 [2024-10-07 09:52:40.504188] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.874 [2024-10-07 09:52:40.504203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.874 qpair failed and we were unable to recover it. 00:31:40.874 [2024-10-07 09:52:40.514199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.874 [2024-10-07 09:52:40.514250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.874 [2024-10-07 09:52:40.514264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.874 [2024-10-07 09:52:40.514271] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.874 [2024-10-07 09:52:40.514277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.874 [2024-10-07 09:52:40.514291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.874 qpair failed and we were unable to recover it. 00:31:40.874 [2024-10-07 09:52:40.524190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.874 [2024-10-07 09:52:40.524243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.874 [2024-10-07 09:52:40.524257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.874 [2024-10-07 09:52:40.524267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.874 [2024-10-07 09:52:40.524273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:40.874 [2024-10-07 09:52:40.524287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:40.874 qpair failed and we were unable to recover it. 
00:31:41.136 [2024-10-07 09:52:40.534199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.136 [2024-10-07 09:52:40.534243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.136 [2024-10-07 09:52:40.534257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.136 [2024-10-07 09:52:40.534264] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.136 [2024-10-07 09:52:40.534270] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.136 [2024-10-07 09:52:40.534284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.136 qpair failed and we were unable to recover it. 00:31:41.136 [2024-10-07 09:52:40.544206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.136 [2024-10-07 09:52:40.544269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.136 [2024-10-07 09:52:40.544282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.137 [2024-10-07 09:52:40.544289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.137 [2024-10-07 09:52:40.544295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.137 [2024-10-07 09:52:40.544310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.137 qpair failed and we were unable to recover it. 00:31:41.137 [2024-10-07 09:52:40.554270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.137 [2024-10-07 09:52:40.554322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.137 [2024-10-07 09:52:40.554335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.137 [2024-10-07 09:52:40.554342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.137 [2024-10-07 09:52:40.554348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.137 [2024-10-07 09:52:40.554362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.137 qpair failed and we were unable to recover it. 
00:31:41.137 [2024-10-07 09:52:40.564173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.137 [2024-10-07 09:52:40.564224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.137 [2024-10-07 09:52:40.564236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.137 [2024-10-07 09:52:40.564244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.137 [2024-10-07 09:52:40.564250] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.137 [2024-10-07 09:52:40.564264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.137 qpair failed and we were unable to recover it. 00:31:41.137 [2024-10-07 09:52:40.574306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.137 [2024-10-07 09:52:40.574355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.137 [2024-10-07 09:52:40.574369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.137 [2024-10-07 09:52:40.574376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.137 [2024-10-07 09:52:40.574383] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.137 [2024-10-07 09:52:40.574397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.137 qpair failed and we were unable to recover it. 00:31:41.137 [2024-10-07 09:52:40.584341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.137 [2024-10-07 09:52:40.584414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.137 [2024-10-07 09:52:40.584428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.137 [2024-10-07 09:52:40.584435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.137 [2024-10-07 09:52:40.584441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.137 [2024-10-07 09:52:40.584455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.137 qpair failed and we were unable to recover it. 
00:31:41.137 [2024-10-07 09:52:40.594411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.137 [2024-10-07 09:52:40.594465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.137 [2024-10-07 09:52:40.594479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.137 [2024-10-07 09:52:40.594486] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.137 [2024-10-07 09:52:40.594492] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.137 [2024-10-07 09:52:40.594506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.137 qpair failed and we were unable to recover it. 00:31:41.137 [2024-10-07 09:52:40.604387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.137 [2024-10-07 09:52:40.604437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.137 [2024-10-07 09:52:40.604451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.137 [2024-10-07 09:52:40.604458] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.137 [2024-10-07 09:52:40.604465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.137 [2024-10-07 09:52:40.604479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.137 qpair failed and we were unable to recover it. 00:31:41.137 [2024-10-07 09:52:40.614445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.137 [2024-10-07 09:52:40.614497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.137 [2024-10-07 09:52:40.614514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.137 [2024-10-07 09:52:40.614521] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.137 [2024-10-07 09:52:40.614528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.137 [2024-10-07 09:52:40.614542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.137 qpair failed and we were unable to recover it. 
00:31:41.137 [2024-10-07 09:52:40.624305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.137 [2024-10-07 09:52:40.624355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.137 [2024-10-07 09:52:40.624368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.137 [2024-10-07 09:52:40.624375] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.137 [2024-10-07 09:52:40.624382] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.137 [2024-10-07 09:52:40.624395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.137 qpair failed and we were unable to recover it. 00:31:41.137 [2024-10-07 09:52:40.634493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.137 [2024-10-07 09:52:40.634545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.137 [2024-10-07 09:52:40.634559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.137 [2024-10-07 09:52:40.634566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.137 [2024-10-07 09:52:40.634573] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.137 [2024-10-07 09:52:40.634586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.137 qpair failed and we were unable to recover it. 00:31:41.137 [2024-10-07 09:52:40.644504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.137 [2024-10-07 09:52:40.644551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.137 [2024-10-07 09:52:40.644564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.137 [2024-10-07 09:52:40.644571] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.137 [2024-10-07 09:52:40.644578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.137 [2024-10-07 09:52:40.644592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.137 qpair failed and we were unable to recover it. 
00:31:41.137 [2024-10-07 09:52:40.654526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.137 [2024-10-07 09:52:40.654571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.137 [2024-10-07 09:52:40.654585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.137 [2024-10-07 09:52:40.654592] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.137 [2024-10-07 09:52:40.654598] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.137 [2024-10-07 09:52:40.654620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.137 qpair failed and we were unable to recover it. 00:31:41.137 [2024-10-07 09:52:40.664540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.137 [2024-10-07 09:52:40.664590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.137 [2024-10-07 09:52:40.664603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.137 [2024-10-07 09:52:40.664610] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.137 [2024-10-07 09:52:40.664620] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.137 [2024-10-07 09:52:40.664635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.137 qpair failed and we were unable to recover it. 00:31:41.137 [2024-10-07 09:52:40.674476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.137 [2024-10-07 09:52:40.674532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.137 [2024-10-07 09:52:40.674545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.137 [2024-10-07 09:52:40.674552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.138 [2024-10-07 09:52:40.674558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.138 [2024-10-07 09:52:40.674572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.138 qpair failed and we were unable to recover it. 
00:31:41.138 [2024-10-07 09:52:40.684606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.138 [2024-10-07 09:52:40.684664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.138 [2024-10-07 09:52:40.684678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.138 [2024-10-07 09:52:40.684686] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.138 [2024-10-07 09:52:40.684692] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.138 [2024-10-07 09:52:40.684706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.138 qpair failed and we were unable to recover it. 00:31:41.138 [2024-10-07 09:52:40.694607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.138 [2024-10-07 09:52:40.694659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.138 [2024-10-07 09:52:40.694672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.138 [2024-10-07 09:52:40.694679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.138 [2024-10-07 09:52:40.694686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.138 [2024-10-07 09:52:40.694699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.138 qpair failed and we were unable to recover it. 00:31:41.138 [2024-10-07 09:52:40.704647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.138 [2024-10-07 09:52:40.704699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.138 [2024-10-07 09:52:40.704715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.138 [2024-10-07 09:52:40.704722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.138 [2024-10-07 09:52:40.704729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.138 [2024-10-07 09:52:40.704743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.138 qpair failed and we were unable to recover it. 
00:31:41.138 [2024-10-07 09:52:40.714723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.138 [2024-10-07 09:52:40.714831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.138 [2024-10-07 09:52:40.714845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.138 [2024-10-07 09:52:40.714852] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.138 [2024-10-07 09:52:40.714859] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.138 [2024-10-07 09:52:40.714873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.138 qpair failed and we were unable to recover it. 00:31:41.138 [2024-10-07 09:52:40.724712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.138 [2024-10-07 09:52:40.724760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.138 [2024-10-07 09:52:40.724773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.138 [2024-10-07 09:52:40.724780] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.138 [2024-10-07 09:52:40.724787] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.138 [2024-10-07 09:52:40.724801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.138 qpair failed and we were unable to recover it. 00:31:41.138 [2024-10-07 09:52:40.734603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.138 [2024-10-07 09:52:40.734658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.138 [2024-10-07 09:52:40.734671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.138 [2024-10-07 09:52:40.734678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.138 [2024-10-07 09:52:40.734685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.138 [2024-10-07 09:52:40.734699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.138 qpair failed and we were unable to recover it. 
00:31:41.138 [2024-10-07 09:52:40.744771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.138 [2024-10-07 09:52:40.744822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.138 [2024-10-07 09:52:40.744835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.138 [2024-10-07 09:52:40.744842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.138 [2024-10-07 09:52:40.744852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.138 [2024-10-07 09:52:40.744866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.138 qpair failed and we were unable to recover it. 00:31:41.138 [2024-10-07 09:52:40.754847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.138 [2024-10-07 09:52:40.754901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.138 [2024-10-07 09:52:40.754914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.138 [2024-10-07 09:52:40.754921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.138 [2024-10-07 09:52:40.754927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.138 [2024-10-07 09:52:40.754941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.138 qpair failed and we were unable to recover it. 00:31:41.138 [2024-10-07 09:52:40.764847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.138 [2024-10-07 09:52:40.764892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.138 [2024-10-07 09:52:40.764906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.138 [2024-10-07 09:52:40.764913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.138 [2024-10-07 09:52:40.764920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.138 [2024-10-07 09:52:40.764933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.138 qpair failed and we were unable to recover it. 
00:31:41.138 [2024-10-07 09:52:40.774807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.138 [2024-10-07 09:52:40.774859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.138 [2024-10-07 09:52:40.774872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.138 [2024-10-07 09:52:40.774879] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.138 [2024-10-07 09:52:40.774886] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.138 [2024-10-07 09:52:40.774900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.138 qpair failed and we were unable to recover it. 00:31:41.138 [2024-10-07 09:52:40.784877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.138 [2024-10-07 09:52:40.784924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.138 [2024-10-07 09:52:40.784937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.138 [2024-10-07 09:52:40.784944] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.138 [2024-10-07 09:52:40.784951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.138 [2024-10-07 09:52:40.784965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.138 qpair failed and we were unable to recover it. 00:31:41.138 [2024-10-07 09:52:40.794962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.138 [2024-10-07 09:52:40.795022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.138 [2024-10-07 09:52:40.795036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.138 [2024-10-07 09:52:40.795043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.138 [2024-10-07 09:52:40.795049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.138 [2024-10-07 09:52:40.795063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.138 qpair failed and we were unable to recover it. 
00:31:41.400 [2024-10-07 09:52:40.804940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.401 [2024-10-07 09:52:40.804994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.401 [2024-10-07 09:52:40.805007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.401 [2024-10-07 09:52:40.805014] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.401 [2024-10-07 09:52:40.805021] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.401 [2024-10-07 09:52:40.805034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.401 qpair failed and we were unable to recover it. 00:31:41.401 [2024-10-07 09:52:40.814944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.401 [2024-10-07 09:52:40.815020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.401 [2024-10-07 09:52:40.815033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.401 [2024-10-07 09:52:40.815040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.401 [2024-10-07 09:52:40.815047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.401 [2024-10-07 09:52:40.815062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.401 qpair failed and we were unable to recover it. 00:31:41.401 [2024-10-07 09:52:40.824975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.401 [2024-10-07 09:52:40.825029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.401 [2024-10-07 09:52:40.825042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.401 [2024-10-07 09:52:40.825049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.401 [2024-10-07 09:52:40.825055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.401 [2024-10-07 09:52:40.825069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.401 qpair failed and we were unable to recover it. 
00:31:41.401 [2024-10-07 09:52:40.834950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.401 [2024-10-07 09:52:40.835002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.401 [2024-10-07 09:52:40.835015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.401 [2024-10-07 09:52:40.835023] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.401 [2024-10-07 09:52:40.835033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.401 [2024-10-07 09:52:40.835048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.401 qpair failed and we were unable to recover it. 00:31:41.401 [2024-10-07 09:52:40.845086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.401 [2024-10-07 09:52:40.845137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.401 [2024-10-07 09:52:40.845150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.401 [2024-10-07 09:52:40.845157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.401 [2024-10-07 09:52:40.845163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.401 [2024-10-07 09:52:40.845177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.401 qpair failed and we were unable to recover it. 00:31:41.401 [2024-10-07 09:52:40.855063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.401 [2024-10-07 09:52:40.855111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.401 [2024-10-07 09:52:40.855124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.401 [2024-10-07 09:52:40.855131] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.401 [2024-10-07 09:52:40.855138] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.401 [2024-10-07 09:52:40.855152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.401 qpair failed and we were unable to recover it. 
00:31:41.401 [2024-10-07 09:52:40.865089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.401 [2024-10-07 09:52:40.865134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.401 [2024-10-07 09:52:40.865148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.401 [2024-10-07 09:52:40.865155] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.401 [2024-10-07 09:52:40.865162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.401 [2024-10-07 09:52:40.865176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.401 qpair failed and we were unable to recover it. 00:31:41.401 [2024-10-07 09:52:40.875163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.401 [2024-10-07 09:52:40.875218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.401 [2024-10-07 09:52:40.875231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.401 [2024-10-07 09:52:40.875239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.401 [2024-10-07 09:52:40.875245] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.401 [2024-10-07 09:52:40.875259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.401 qpair failed and we were unable to recover it. 00:31:41.401 [2024-10-07 09:52:40.885164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.401 [2024-10-07 09:52:40.885215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.401 [2024-10-07 09:52:40.885229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.401 [2024-10-07 09:52:40.885236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.401 [2024-10-07 09:52:40.885242] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.401 [2024-10-07 09:52:40.885256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.401 qpair failed and we were unable to recover it. 
00:31:41.401 [2024-10-07 09:52:40.895039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.401 [2024-10-07 09:52:40.895091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.401 [2024-10-07 09:52:40.895107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.401 [2024-10-07 09:52:40.895114] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.401 [2024-10-07 09:52:40.895121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.401 [2024-10-07 09:52:40.895138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.401 qpair failed and we were unable to recover it. 00:31:41.401 [2024-10-07 09:52:40.905180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.402 [2024-10-07 09:52:40.905227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.402 [2024-10-07 09:52:40.905242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.402 [2024-10-07 09:52:40.905249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.402 [2024-10-07 09:52:40.905256] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.402 [2024-10-07 09:52:40.905270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.402 qpair failed and we were unable to recover it. 00:31:41.402 [2024-10-07 09:52:40.915266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.402 [2024-10-07 09:52:40.915321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.402 [2024-10-07 09:52:40.915335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.402 [2024-10-07 09:52:40.915343] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.402 [2024-10-07 09:52:40.915349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.402 [2024-10-07 09:52:40.915363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.402 qpair failed and we were unable to recover it. 
00:31:41.402 [2024-10-07 09:52:40.925278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.402 [2024-10-07 09:52:40.925327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.402 [2024-10-07 09:52:40.925341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.402 [2024-10-07 09:52:40.925352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.402 [2024-10-07 09:52:40.925358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.402 [2024-10-07 09:52:40.925372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.402 qpair failed and we were unable to recover it. 00:31:41.402 [2024-10-07 09:52:40.935275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.402 [2024-10-07 09:52:40.935372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.402 [2024-10-07 09:52:40.935396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.402 [2024-10-07 09:52:40.935403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.402 [2024-10-07 09:52:40.935410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.402 [2024-10-07 09:52:40.935429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.402 qpair failed and we were unable to recover it. 00:31:41.402 [2024-10-07 09:52:40.945328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.402 [2024-10-07 09:52:40.945377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.402 [2024-10-07 09:52:40.945392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.402 [2024-10-07 09:52:40.945399] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.402 [2024-10-07 09:52:40.945406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.402 [2024-10-07 09:52:40.945420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.402 qpair failed and we were unable to recover it. 
00:31:41.402 [2024-10-07 09:52:40.955396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.402 [2024-10-07 09:52:40.955450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.402 [2024-10-07 09:52:40.955463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.402 [2024-10-07 09:52:40.955470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.402 [2024-10-07 09:52:40.955477] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.402 [2024-10-07 09:52:40.955491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.402 qpair failed and we were unable to recover it. 00:31:41.402 [2024-10-07 09:52:40.965386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.402 [2024-10-07 09:52:40.965434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.402 [2024-10-07 09:52:40.965448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.402 [2024-10-07 09:52:40.965455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.402 [2024-10-07 09:52:40.965462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.402 [2024-10-07 09:52:40.965476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.402 qpair failed and we were unable to recover it. 00:31:41.402 [2024-10-07 09:52:40.975274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.402 [2024-10-07 09:52:40.975325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.402 [2024-10-07 09:52:40.975338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.402 [2024-10-07 09:52:40.975345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.402 [2024-10-07 09:52:40.975352] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.402 [2024-10-07 09:52:40.975366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.402 qpair failed and we were unable to recover it. 
00:31:41.402 [2024-10-07 09:52:40.985326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.402 [2024-10-07 09:52:40.985377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.402 [2024-10-07 09:52:40.985390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.402 [2024-10-07 09:52:40.985397] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.402 [2024-10-07 09:52:40.985404] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.402 [2024-10-07 09:52:40.985418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.402 qpair failed and we were unable to recover it. 00:31:41.402 [2024-10-07 09:52:40.995510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.402 [2024-10-07 09:52:40.995565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.402 [2024-10-07 09:52:40.995578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.402 [2024-10-07 09:52:40.995585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.402 [2024-10-07 09:52:40.995592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.402 [2024-10-07 09:52:40.995606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.402 qpair failed and we were unable to recover it. 00:31:41.402 [2024-10-07 09:52:41.005353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.402 [2024-10-07 09:52:41.005400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.402 [2024-10-07 09:52:41.005414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.402 [2024-10-07 09:52:41.005421] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.403 [2024-10-07 09:52:41.005427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.403 [2024-10-07 09:52:41.005441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.403 qpair failed and we were unable to recover it. 
00:31:41.403 [2024-10-07 09:52:41.015518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.403 [2024-10-07 09:52:41.015566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.403 [2024-10-07 09:52:41.015580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.403 [2024-10-07 09:52:41.015591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.403 [2024-10-07 09:52:41.015597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.403 [2024-10-07 09:52:41.015612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.403 qpair failed and we were unable to recover it. 00:31:41.403 [2024-10-07 09:52:41.025537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.403 [2024-10-07 09:52:41.025597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.403 [2024-10-07 09:52:41.025611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.403 [2024-10-07 09:52:41.025623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.403 [2024-10-07 09:52:41.025629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.403 [2024-10-07 09:52:41.025644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.403 qpair failed and we were unable to recover it. 00:31:41.403 [2024-10-07 09:52:41.035621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.403 [2024-10-07 09:52:41.035678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.403 [2024-10-07 09:52:41.035692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.403 [2024-10-07 09:52:41.035699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.403 [2024-10-07 09:52:41.035705] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.403 [2024-10-07 09:52:41.035719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.403 qpair failed and we were unable to recover it. 
00:31:41.403 [2024-10-07 09:52:41.045590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.403 [2024-10-07 09:52:41.045691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.403 [2024-10-07 09:52:41.045705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.403 [2024-10-07 09:52:41.045713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.403 [2024-10-07 09:52:41.045719] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.403 [2024-10-07 09:52:41.045733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.403 qpair failed and we were unable to recover it. 00:31:41.403 [2024-10-07 09:52:41.055606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.403 [2024-10-07 09:52:41.055706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.403 [2024-10-07 09:52:41.055720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.403 [2024-10-07 09:52:41.055728] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.403 [2024-10-07 09:52:41.055734] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.403 [2024-10-07 09:52:41.055748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.403 qpair failed and we were unable to recover it. 00:31:41.665 [2024-10-07 09:52:41.065496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.665 [2024-10-07 09:52:41.065544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.665 [2024-10-07 09:52:41.065557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.665 [2024-10-07 09:52:41.065565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.665 [2024-10-07 09:52:41.065571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.665 [2024-10-07 09:52:41.065585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.665 qpair failed and we were unable to recover it. 
00:31:41.665 [2024-10-07 09:52:41.075708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.665 [2024-10-07 09:52:41.075769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.665 [2024-10-07 09:52:41.075782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.665 [2024-10-07 09:52:41.075790] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.665 [2024-10-07 09:52:41.075796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.665 [2024-10-07 09:52:41.075811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.665 qpair failed and we were unable to recover it. 00:31:41.665 [2024-10-07 09:52:41.085671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.665 [2024-10-07 09:52:41.085721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.665 [2024-10-07 09:52:41.085735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.665 [2024-10-07 09:52:41.085742] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.665 [2024-10-07 09:52:41.085749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.665 [2024-10-07 09:52:41.085763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.665 qpair failed and we were unable to recover it. 00:31:41.665 [2024-10-07 09:52:41.095579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.665 [2024-10-07 09:52:41.095666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.665 [2024-10-07 09:52:41.095680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.665 [2024-10-07 09:52:41.095687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.665 [2024-10-07 09:52:41.095693] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.665 [2024-10-07 09:52:41.095707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.665 qpair failed and we were unable to recover it. 
00:31:41.665 [2024-10-07 09:52:41.105723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.665 [2024-10-07 09:52:41.105808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.665 [2024-10-07 09:52:41.105825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.665 [2024-10-07 09:52:41.105832] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.665 [2024-10-07 09:52:41.105838] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.665 [2024-10-07 09:52:41.105853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.665 qpair failed and we were unable to recover it. 00:31:41.665 [2024-10-07 09:52:41.115814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.665 [2024-10-07 09:52:41.115882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.665 [2024-10-07 09:52:41.115896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.665 [2024-10-07 09:52:41.115903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.665 [2024-10-07 09:52:41.115909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.665 [2024-10-07 09:52:41.115923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.665 qpair failed and we were unable to recover it. 00:31:41.665 [2024-10-07 09:52:41.125779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.665 [2024-10-07 09:52:41.125828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.665 [2024-10-07 09:52:41.125842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.665 [2024-10-07 09:52:41.125849] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.665 [2024-10-07 09:52:41.125855] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.665 [2024-10-07 09:52:41.125869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.665 qpair failed and we were unable to recover it. 
00:31:41.665 [2024-10-07 09:52:41.135803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.665 [2024-10-07 09:52:41.135851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.665 [2024-10-07 09:52:41.135864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.665 [2024-10-07 09:52:41.135871] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.665 [2024-10-07 09:52:41.135878] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.665 [2024-10-07 09:52:41.135892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.666 qpair failed and we were unable to recover it. 00:31:41.666 [2024-10-07 09:52:41.145850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.666 [2024-10-07 09:52:41.145904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.666 [2024-10-07 09:52:41.145917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.666 [2024-10-07 09:52:41.145924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.666 [2024-10-07 09:52:41.145930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.666 [2024-10-07 09:52:41.145948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.666 qpair failed and we were unable to recover it. 00:31:41.666 [2024-10-07 09:52:41.155925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.666 [2024-10-07 09:52:41.155980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.666 [2024-10-07 09:52:41.155994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.666 [2024-10-07 09:52:41.156001] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.666 [2024-10-07 09:52:41.156007] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.666 [2024-10-07 09:52:41.156021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.666 qpair failed and we were unable to recover it. 
00:31:41.666 [2024-10-07 09:52:41.165793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.666 [2024-10-07 09:52:41.165847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.666 [2024-10-07 09:52:41.165861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.666 [2024-10-07 09:52:41.165868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.666 [2024-10-07 09:52:41.165874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.666 [2024-10-07 09:52:41.165888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.666 qpair failed and we were unable to recover it. 00:31:41.666 [2024-10-07 09:52:41.175893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.666 [2024-10-07 09:52:41.175942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.666 [2024-10-07 09:52:41.175955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.666 [2024-10-07 09:52:41.175962] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.666 [2024-10-07 09:52:41.175969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.666 [2024-10-07 09:52:41.175983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.666 qpair failed and we were unable to recover it. 00:31:41.666 [2024-10-07 09:52:41.185824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.666 [2024-10-07 09:52:41.185873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.666 [2024-10-07 09:52:41.185887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.666 [2024-10-07 09:52:41.185894] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.666 [2024-10-07 09:52:41.185900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.666 [2024-10-07 09:52:41.185914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.666 qpair failed and we were unable to recover it. 
00:31:41.666 [2024-10-07 09:52:41.196027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.666 [2024-10-07 09:52:41.196103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.666 [2024-10-07 09:52:41.196120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.666 [2024-10-07 09:52:41.196127] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.666 [2024-10-07 09:52:41.196133] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.666 [2024-10-07 09:52:41.196147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.666 qpair failed and we were unable to recover it. 00:31:41.666 [2024-10-07 09:52:41.206022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.666 [2024-10-07 09:52:41.206072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.666 [2024-10-07 09:52:41.206086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.666 [2024-10-07 09:52:41.206093] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.666 [2024-10-07 09:52:41.206099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.666 [2024-10-07 09:52:41.206113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.666 qpair failed and we were unable to recover it. 00:31:41.666 [2024-10-07 09:52:41.216000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.666 [2024-10-07 09:52:41.216050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.666 [2024-10-07 09:52:41.216064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.666 [2024-10-07 09:52:41.216071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.666 [2024-10-07 09:52:41.216078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.666 [2024-10-07 09:52:41.216092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.666 qpair failed and we were unable to recover it. 
00:31:41.666 [2024-10-07 09:52:41.226073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.666 [2024-10-07 09:52:41.226121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.666 [2024-10-07 09:52:41.226134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.666 [2024-10-07 09:52:41.226142] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.666 [2024-10-07 09:52:41.226148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.666 [2024-10-07 09:52:41.226162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.666 qpair failed and we were unable to recover it. 00:31:41.666 [2024-10-07 09:52:41.236105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.666 [2024-10-07 09:52:41.236166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.666 [2024-10-07 09:52:41.236179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.666 [2024-10-07 09:52:41.236187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.666 [2024-10-07 09:52:41.236193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.666 [2024-10-07 09:52:41.236214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.666 qpair failed and we were unable to recover it. 00:31:41.666 [2024-10-07 09:52:41.246148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.666 [2024-10-07 09:52:41.246196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.666 [2024-10-07 09:52:41.246209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.666 [2024-10-07 09:52:41.246217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.666 [2024-10-07 09:52:41.246223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.666 [2024-10-07 09:52:41.246237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.666 qpair failed and we were unable to recover it. 
00:31:41.666 [2024-10-07 09:52:41.256127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.666 [2024-10-07 09:52:41.256169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.666 [2024-10-07 09:52:41.256182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.666 [2024-10-07 09:52:41.256189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.666 [2024-10-07 09:52:41.256195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.666 [2024-10-07 09:52:41.256209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.666 qpair failed and we were unable to recover it. 00:31:41.666 [2024-10-07 09:52:41.266048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.666 [2024-10-07 09:52:41.266095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.666 [2024-10-07 09:52:41.266110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.666 [2024-10-07 09:52:41.266118] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.666 [2024-10-07 09:52:41.266124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.666 [2024-10-07 09:52:41.266139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.666 qpair failed and we were unable to recover it. 00:31:41.666 [2024-10-07 09:52:41.276260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.666 [2024-10-07 09:52:41.276338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.666 [2024-10-07 09:52:41.276352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.667 [2024-10-07 09:52:41.276359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.667 [2024-10-07 09:52:41.276365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.667 [2024-10-07 09:52:41.276379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.667 qpair failed and we were unable to recover it. 
00:31:41.667 [2024-10-07 09:52:41.286256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.667 [2024-10-07 09:52:41.286311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.667 [2024-10-07 09:52:41.286328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.667 [2024-10-07 09:52:41.286335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.667 [2024-10-07 09:52:41.286342] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.667 [2024-10-07 09:52:41.286356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.667 qpair failed and we were unable to recover it. 00:31:41.667 [2024-10-07 09:52:41.296129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.667 [2024-10-07 09:52:41.296180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.667 [2024-10-07 09:52:41.296194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.667 [2024-10-07 09:52:41.296201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.667 [2024-10-07 09:52:41.296207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.667 [2024-10-07 09:52:41.296221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.667 qpair failed and we were unable to recover it. 00:31:41.667 [2024-10-07 09:52:41.306283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.667 [2024-10-07 09:52:41.306336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.667 [2024-10-07 09:52:41.306349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.667 [2024-10-07 09:52:41.306356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.667 [2024-10-07 09:52:41.306363] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.667 [2024-10-07 09:52:41.306377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.667 qpair failed and we were unable to recover it. 
00:31:41.667 [2024-10-07 09:52:41.316366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.667 [2024-10-07 09:52:41.316422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.667 [2024-10-07 09:52:41.316436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.667 [2024-10-07 09:52:41.316443] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.667 [2024-10-07 09:52:41.316449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.667 [2024-10-07 09:52:41.316463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.667 qpair failed and we were unable to recover it. 00:31:41.929 [2024-10-07 09:52:41.326358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.929 [2024-10-07 09:52:41.326482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.929 [2024-10-07 09:52:41.326495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.929 [2024-10-07 09:52:41.326502] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.929 [2024-10-07 09:52:41.326513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.929 [2024-10-07 09:52:41.326527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.929 qpair failed and we were unable to recover it. 00:31:41.929 [2024-10-07 09:52:41.336273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.929 [2024-10-07 09:52:41.336327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.929 [2024-10-07 09:52:41.336340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.929 [2024-10-07 09:52:41.336347] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.929 [2024-10-07 09:52:41.336354] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.929 [2024-10-07 09:52:41.336367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.929 qpair failed and we were unable to recover it. 
00:31:41.929 [2024-10-07 09:52:41.346271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.929 [2024-10-07 09:52:41.346322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.929 [2024-10-07 09:52:41.346335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.929 [2024-10-07 09:52:41.346342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.929 [2024-10-07 09:52:41.346348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.929 [2024-10-07 09:52:41.346362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.929 qpair failed and we were unable to recover it. 00:31:41.929 [2024-10-07 09:52:41.356465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.929 [2024-10-07 09:52:41.356517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.929 [2024-10-07 09:52:41.356530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.929 [2024-10-07 09:52:41.356537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.929 [2024-10-07 09:52:41.356543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.929 [2024-10-07 09:52:41.356557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.929 qpair failed and we were unable to recover it. 00:31:41.929 [2024-10-07 09:52:41.366466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.929 [2024-10-07 09:52:41.366517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.929 [2024-10-07 09:52:41.366530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.929 [2024-10-07 09:52:41.366537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.929 [2024-10-07 09:52:41.366544] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.929 [2024-10-07 09:52:41.366558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.929 qpair failed and we were unable to recover it. 
00:31:41.929 [2024-10-07 09:52:41.376485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.929 [2024-10-07 09:52:41.376542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.929 [2024-10-07 09:52:41.376555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.929 [2024-10-07 09:52:41.376562] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.929 [2024-10-07 09:52:41.376569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.929 [2024-10-07 09:52:41.376583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.929 qpair failed and we were unable to recover it. 00:31:41.929 [2024-10-07 09:52:41.386522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.929 [2024-10-07 09:52:41.386569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.929 [2024-10-07 09:52:41.386583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.929 [2024-10-07 09:52:41.386590] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.929 [2024-10-07 09:52:41.386596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.929 [2024-10-07 09:52:41.386610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.929 qpair failed and we were unable to recover it. 00:31:41.929 [2024-10-07 09:52:41.396577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.930 [2024-10-07 09:52:41.396637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.930 [2024-10-07 09:52:41.396651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.930 [2024-10-07 09:52:41.396658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.930 [2024-10-07 09:52:41.396664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:41.930 [2024-10-07 09:52:41.396679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.930 qpair failed and we were unable to recover it. 
00:31:41.930 [2024-10-07 09:52:41.406560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.930 [2024-10-07 09:52:41.406610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.930 [2024-10-07 09:52:41.406626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.930 [2024-10-07 09:52:41.406633] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.930 [2024-10-07 09:52:41.406640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.930 [2024-10-07 09:52:41.406654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.930 qpair failed and we were unable to recover it.
00:31:41.930 [2024-10-07 09:52:41.416586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.930 [2024-10-07 09:52:41.416639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.930 [2024-10-07 09:52:41.416653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.930 [2024-10-07 09:52:41.416660] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.930 [2024-10-07 09:52:41.416670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.930 [2024-10-07 09:52:41.416684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.930 qpair failed and we were unable to recover it.
00:31:41.930 [2024-10-07 09:52:41.426611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.930 [2024-10-07 09:52:41.426666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.930 [2024-10-07 09:52:41.426680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.930 [2024-10-07 09:52:41.426687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.930 [2024-10-07 09:52:41.426694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.930 [2024-10-07 09:52:41.426708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.930 qpair failed and we were unable to recover it.
00:31:41.930 [2024-10-07 09:52:41.436666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.930 [2024-10-07 09:52:41.436740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.930 [2024-10-07 09:52:41.436754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.930 [2024-10-07 09:52:41.436761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.930 [2024-10-07 09:52:41.436769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.930 [2024-10-07 09:52:41.436784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.930 qpair failed and we were unable to recover it.
00:31:41.930 [2024-10-07 09:52:41.446676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.930 [2024-10-07 09:52:41.446768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.930 [2024-10-07 09:52:41.446782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.930 [2024-10-07 09:52:41.446789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.930 [2024-10-07 09:52:41.446795] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.930 [2024-10-07 09:52:41.446810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.930 qpair failed and we were unable to recover it.
00:31:41.930 [2024-10-07 09:52:41.456680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.930 [2024-10-07 09:52:41.456739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.930 [2024-10-07 09:52:41.456753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.930 [2024-10-07 09:52:41.456761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.930 [2024-10-07 09:52:41.456767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.930 [2024-10-07 09:52:41.456787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.930 qpair failed and we were unable to recover it.
00:31:41.930 [2024-10-07 09:52:41.466594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.930 [2024-10-07 09:52:41.466641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.930 [2024-10-07 09:52:41.466657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.930 [2024-10-07 09:52:41.466664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.930 [2024-10-07 09:52:41.466670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.930 [2024-10-07 09:52:41.466685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.930 qpair failed and we were unable to recover it.
00:31:41.930 [2024-10-07 09:52:41.476813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.930 [2024-10-07 09:52:41.476884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.930 [2024-10-07 09:52:41.476899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.930 [2024-10-07 09:52:41.476906] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.930 [2024-10-07 09:52:41.476912] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.930 [2024-10-07 09:52:41.476926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.930 qpair failed and we were unable to recover it.
00:31:41.930 [2024-10-07 09:52:41.486675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.930 [2024-10-07 09:52:41.486730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.930 [2024-10-07 09:52:41.486744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.930 [2024-10-07 09:52:41.486751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.930 [2024-10-07 09:52:41.486757] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.930 [2024-10-07 09:52:41.486771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.930 qpair failed and we were unable to recover it.
00:31:41.930 [2024-10-07 09:52:41.496823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.930 [2024-10-07 09:52:41.496909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.930 [2024-10-07 09:52:41.496923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.930 [2024-10-07 09:52:41.496930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.930 [2024-10-07 09:52:41.496936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.930 [2024-10-07 09:52:41.496950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.930 qpair failed and we were unable to recover it.
00:31:41.930 [2024-10-07 09:52:41.506831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.930 [2024-10-07 09:52:41.506877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.930 [2024-10-07 09:52:41.506891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.930 [2024-10-07 09:52:41.506901] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.930 [2024-10-07 09:52:41.506908] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.930 [2024-10-07 09:52:41.506922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.930 qpair failed and we were unable to recover it.
00:31:41.930 [2024-10-07 09:52:41.516902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.930 [2024-10-07 09:52:41.516960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.930 [2024-10-07 09:52:41.516975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.930 [2024-10-07 09:52:41.516982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.930 [2024-10-07 09:52:41.516990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.930 [2024-10-07 09:52:41.517009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.930 qpair failed and we were unable to recover it.
00:31:41.930 [2024-10-07 09:52:41.526909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.930 [2024-10-07 09:52:41.526956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.930 [2024-10-07 09:52:41.526970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.931 [2024-10-07 09:52:41.526978] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.931 [2024-10-07 09:52:41.526984] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.931 [2024-10-07 09:52:41.526998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.931 qpair failed and we were unable to recover it.
00:31:41.931 [2024-10-07 09:52:41.536935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.931 [2024-10-07 09:52:41.536978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.931 [2024-10-07 09:52:41.536991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.931 [2024-10-07 09:52:41.536999] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.931 [2024-10-07 09:52:41.537005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.931 [2024-10-07 09:52:41.537019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.931 qpair failed and we were unable to recover it.
00:31:41.931 [2024-10-07 09:52:41.546933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.931 [2024-10-07 09:52:41.547049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.931 [2024-10-07 09:52:41.547063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.931 [2024-10-07 09:52:41.547070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.931 [2024-10-07 09:52:41.547077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.931 [2024-10-07 09:52:41.547091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.931 qpair failed and we were unable to recover it.
00:31:41.931 [2024-10-07 09:52:41.557028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.931 [2024-10-07 09:52:41.557081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.931 [2024-10-07 09:52:41.557095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.931 [2024-10-07 09:52:41.557102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.931 [2024-10-07 09:52:41.557108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.931 [2024-10-07 09:52:41.557122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.931 qpair failed and we were unable to recover it.
00:31:41.931 [2024-10-07 09:52:41.567008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.931 [2024-10-07 09:52:41.567063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.931 [2024-10-07 09:52:41.567076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.931 [2024-10-07 09:52:41.567084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.931 [2024-10-07 09:52:41.567090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.931 [2024-10-07 09:52:41.567104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.931 qpair failed and we were unable to recover it.
00:31:41.931 [2024-10-07 09:52:41.577005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.931 [2024-10-07 09:52:41.577053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.931 [2024-10-07 09:52:41.577067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.931 [2024-10-07 09:52:41.577074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.931 [2024-10-07 09:52:41.577080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.931 [2024-10-07 09:52:41.577094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.931 qpair failed and we were unable to recover it.
00:31:41.931 [2024-10-07 09:52:41.587045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.931 [2024-10-07 09:52:41.587135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.931 [2024-10-07 09:52:41.587149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.931 [2024-10-07 09:52:41.587156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.931 [2024-10-07 09:52:41.587162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:41.931 [2024-10-07 09:52:41.587176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:41.931 qpair failed and we were unable to recover it.
00:31:42.193 [2024-10-07 09:52:41.597124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.193 [2024-10-07 09:52:41.597178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.193 [2024-10-07 09:52:41.597194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.193 [2024-10-07 09:52:41.597201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.193 [2024-10-07 09:52:41.597208] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.193 [2024-10-07 09:52:41.597221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.193 qpair failed and we were unable to recover it.
00:31:42.193 [2024-10-07 09:52:41.606982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.193 [2024-10-07 09:52:41.607034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.193 [2024-10-07 09:52:41.607047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.193 [2024-10-07 09:52:41.607055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.193 [2024-10-07 09:52:41.607061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.193 [2024-10-07 09:52:41.607075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.193 qpair failed and we were unable to recover it.
00:31:42.193 [2024-10-07 09:52:41.617100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.193 [2024-10-07 09:52:41.617151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.193 [2024-10-07 09:52:41.617164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.193 [2024-10-07 09:52:41.617172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.193 [2024-10-07 09:52:41.617178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.193 [2024-10-07 09:52:41.617192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.193 qpair failed and we were unable to recover it.
00:31:42.193 [2024-10-07 09:52:41.627156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.193 [2024-10-07 09:52:41.627207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.193 [2024-10-07 09:52:41.627221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.193 [2024-10-07 09:52:41.627228] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.193 [2024-10-07 09:52:41.627235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.193 [2024-10-07 09:52:41.627248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.193 qpair failed and we were unable to recover it.
00:31:42.193 [2024-10-07 09:52:41.637098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.193 [2024-10-07 09:52:41.637156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.193 [2024-10-07 09:52:41.637169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.193 [2024-10-07 09:52:41.637176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.193 [2024-10-07 09:52:41.637183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.193 [2024-10-07 09:52:41.637200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.193 qpair failed and we were unable to recover it.
00:31:42.193 [2024-10-07 09:52:41.647241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.193 [2024-10-07 09:52:41.647291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.193 [2024-10-07 09:52:41.647305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.193 [2024-10-07 09:52:41.647312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.193 [2024-10-07 09:52:41.647318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.193 [2024-10-07 09:52:41.647332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.193 qpair failed and we were unable to recover it.
00:31:42.193 [2024-10-07 09:52:41.657235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.193 [2024-10-07 09:52:41.657281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.193 [2024-10-07 09:52:41.657294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.193 [2024-10-07 09:52:41.657301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.193 [2024-10-07 09:52:41.657307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.193 [2024-10-07 09:52:41.657321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.193 qpair failed and we were unable to recover it.
00:31:42.193 [2024-10-07 09:52:41.667313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.193 [2024-10-07 09:52:41.667362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.193 [2024-10-07 09:52:41.667376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.193 [2024-10-07 09:52:41.667383] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.193 [2024-10-07 09:52:41.667389] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.193 [2024-10-07 09:52:41.667403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.193 qpair failed and we were unable to recover it.
00:31:42.193 [2024-10-07 09:52:41.677355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.193 [2024-10-07 09:52:41.677424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.193 [2024-10-07 09:52:41.677437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.193 [2024-10-07 09:52:41.677445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.193 [2024-10-07 09:52:41.677451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.193 [2024-10-07 09:52:41.677465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.193 qpair failed and we were unable to recover it.
00:31:42.193 [2024-10-07 09:52:41.687358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.193 [2024-10-07 09:52:41.687407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.193 [2024-10-07 09:52:41.687424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.193 [2024-10-07 09:52:41.687431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.193 [2024-10-07 09:52:41.687438] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.193 [2024-10-07 09:52:41.687452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.193 qpair failed and we were unable to recover it.
00:31:42.193 [2024-10-07 09:52:41.697340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.193 [2024-10-07 09:52:41.697389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.193 [2024-10-07 09:52:41.697403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.194 [2024-10-07 09:52:41.697410] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.194 [2024-10-07 09:52:41.697416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.194 [2024-10-07 09:52:41.697430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.194 qpair failed and we were unable to recover it.
00:31:42.194 [2024-10-07 09:52:41.707373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.194 [2024-10-07 09:52:41.707418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.194 [2024-10-07 09:52:41.707432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.194 [2024-10-07 09:52:41.707439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.194 [2024-10-07 09:52:41.707446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.194 [2024-10-07 09:52:41.707461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.194 qpair failed and we were unable to recover it.
00:31:42.194 [2024-10-07 09:52:41.717448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.194 [2024-10-07 09:52:41.717507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.194 [2024-10-07 09:52:41.717532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.194 [2024-10-07 09:52:41.717541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.194 [2024-10-07 09:52:41.717548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.194 [2024-10-07 09:52:41.717567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.194 qpair failed and we were unable to recover it.
00:31:42.194 [2024-10-07 09:52:41.727442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.194 [2024-10-07 09:52:41.727498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.194 [2024-10-07 09:52:41.727513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.194 [2024-10-07 09:52:41.727521] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.194 [2024-10-07 09:52:41.727528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.194 [2024-10-07 09:52:41.727547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.194 qpair failed and we were unable to recover it.
00:31:42.194 [2024-10-07 09:52:41.737517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.194 [2024-10-07 09:52:41.737565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.194 [2024-10-07 09:52:41.737579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.194 [2024-10-07 09:52:41.737586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.194 [2024-10-07 09:52:41.737593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.194 [2024-10-07 09:52:41.737608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.194 qpair failed and we were unable to recover it.
00:31:42.194 [2024-10-07 09:52:41.747380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.194 [2024-10-07 09:52:41.747443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.194 [2024-10-07 09:52:41.747456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.194 [2024-10-07 09:52:41.747463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.194 [2024-10-07 09:52:41.747469] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.194 [2024-10-07 09:52:41.747484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.194 qpair failed and we were unable to recover it.
00:31:42.194 [2024-10-07 09:52:41.757647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.194 [2024-10-07 09:52:41.757742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.194 [2024-10-07 09:52:41.757755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.194 [2024-10-07 09:52:41.757762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.194 [2024-10-07 09:52:41.757769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.194 [2024-10-07 09:52:41.757784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.194 qpair failed and we were unable to recover it.
00:31:42.194 [2024-10-07 09:52:41.767568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.194 [2024-10-07 09:52:41.767655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.194 [2024-10-07 09:52:41.767669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.194 [2024-10-07 09:52:41.767676] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.194 [2024-10-07 09:52:41.767682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.194 [2024-10-07 09:52:41.767697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.194 qpair failed and we were unable to recover it.
00:31:42.194 [2024-10-07 09:52:41.777571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.194 [2024-10-07 09:52:41.777627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.194 [2024-10-07 09:52:41.777646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.194 [2024-10-07 09:52:41.777655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.194 [2024-10-07 09:52:41.777663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.194 [2024-10-07 09:52:41.777679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.194 qpair failed and we were unable to recover it.
00:31:42.194 [2024-10-07 09:52:41.787570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.194 [2024-10-07 09:52:41.787634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.194 [2024-10-07 09:52:41.787649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.194 [2024-10-07 09:52:41.787656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.194 [2024-10-07 09:52:41.787663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.194 [2024-10-07 09:52:41.787678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.194 qpair failed and we were unable to recover it.
00:31:42.194 [2024-10-07 09:52:41.797681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.194 [2024-10-07 09:52:41.797737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.194 [2024-10-07 09:52:41.797750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.194 [2024-10-07 09:52:41.797757] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.194 [2024-10-07 09:52:41.797763] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.194 [2024-10-07 09:52:41.797778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.194 qpair failed and we were unable to recover it.
00:31:42.194 [2024-10-07 09:52:41.807670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.194 [2024-10-07 09:52:41.807742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.194 [2024-10-07 09:52:41.807756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.194 [2024-10-07 09:52:41.807763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.194 [2024-10-07 09:52:41.807769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.194 [2024-10-07 09:52:41.807785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.194 qpair failed and we were unable to recover it.
00:31:42.194 [2024-10-07 09:52:41.817735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.194 [2024-10-07 09:52:41.817785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.194 [2024-10-07 09:52:41.817799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.194 [2024-10-07 09:52:41.817806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.194 [2024-10-07 09:52:41.817816] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.194 [2024-10-07 09:52:41.817831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.194 qpair failed and we were unable to recover it.
00:31:42.194 [2024-10-07 09:52:41.827750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.195 [2024-10-07 09:52:41.827798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.195 [2024-10-07 09:52:41.827812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.195 [2024-10-07 09:52:41.827819] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.195 [2024-10-07 09:52:41.827825] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.195 [2024-10-07 09:52:41.827839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.195 qpair failed and we were unable to recover it.
00:31:42.195 [2024-10-07 09:52:41.837842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.195 [2024-10-07 09:52:41.837895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.195 [2024-10-07 09:52:41.837909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.195 [2024-10-07 09:52:41.837916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.195 [2024-10-07 09:52:41.837922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.195 [2024-10-07 09:52:41.837937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.195 qpair failed and we were unable to recover it.
00:31:42.195 [2024-10-07 09:52:41.847664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.195 [2024-10-07 09:52:41.847718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.195 [2024-10-07 09:52:41.847731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.195 [2024-10-07 09:52:41.847738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.195 [2024-10-07 09:52:41.847744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.195 [2024-10-07 09:52:41.847758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.195 qpair failed and we were unable to recover it.
00:31:42.457 [2024-10-07 09:52:41.857790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.457 [2024-10-07 09:52:41.857841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.457 [2024-10-07 09:52:41.857854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.457 [2024-10-07 09:52:41.857861] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.457 [2024-10-07 09:52:41.857868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.457 [2024-10-07 09:52:41.857882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.457 qpair failed and we were unable to recover it.
00:31:42.457 [2024-10-07 09:52:41.867809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.457 [2024-10-07 09:52:41.867865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.457 [2024-10-07 09:52:41.867879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.457 [2024-10-07 09:52:41.867886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.457 [2024-10-07 09:52:41.867893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.457 [2024-10-07 09:52:41.867907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.457 qpair failed and we were unable to recover it.
00:31:42.457 [2024-10-07 09:52:41.877905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.457 [2024-10-07 09:52:41.877960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.457 [2024-10-07 09:52:41.877973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.457 [2024-10-07 09:52:41.877980] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.457 [2024-10-07 09:52:41.877986] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.457 [2024-10-07 09:52:41.878000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.457 qpair failed and we were unable to recover it.
00:31:42.457 [2024-10-07 09:52:41.887871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.457 [2024-10-07 09:52:41.887919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.457 [2024-10-07 09:52:41.887932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.457 [2024-10-07 09:52:41.887939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.457 [2024-10-07 09:52:41.887946] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.457 [2024-10-07 09:52:41.887960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.457 qpair failed and we were unable to recover it.
00:31:42.457 [2024-10-07 09:52:41.897869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.457 [2024-10-07 09:52:41.897916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.457 [2024-10-07 09:52:41.897930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.457 [2024-10-07 09:52:41.897937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.457 [2024-10-07 09:52:41.897943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.457 [2024-10-07 09:52:41.897957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.457 qpair failed and we were unable to recover it.
00:31:42.457 [2024-10-07 09:52:41.907928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.457 [2024-10-07 09:52:41.908022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.457 [2024-10-07 09:52:41.908036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.457 [2024-10-07 09:52:41.908043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.457 [2024-10-07 09:52:41.908057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.457 [2024-10-07 09:52:41.908071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.457 qpair failed and we were unable to recover it.
00:31:42.457 [2024-10-07 09:52:41.918018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.457 [2024-10-07 09:52:41.918093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.457 [2024-10-07 09:52:41.918107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.457 [2024-10-07 09:52:41.918115] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.457 [2024-10-07 09:52:41.918122] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.457 [2024-10-07 09:52:41.918137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.457 qpair failed and we were unable to recover it.
00:31:42.457 [2024-10-07 09:52:41.927992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.457 [2024-10-07 09:52:41.928040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.457 [2024-10-07 09:52:41.928053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.457 [2024-10-07 09:52:41.928060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.457 [2024-10-07 09:52:41.928066] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.457 [2024-10-07 09:52:41.928080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.457 qpair failed and we were unable to recover it.
00:31:42.458 [2024-10-07 09:52:41.938000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.458 [2024-10-07 09:52:41.938049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.458 [2024-10-07 09:52:41.938064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.458 [2024-10-07 09:52:41.938071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.458 [2024-10-07 09:52:41.938077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.458 [2024-10-07 09:52:41.938095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.458 qpair failed and we were unable to recover it.
00:31:42.458 [2024-10-07 09:52:41.948018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.458 [2024-10-07 09:52:41.948064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.458 [2024-10-07 09:52:41.948077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.458 [2024-10-07 09:52:41.948084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.458 [2024-10-07 09:52:41.948091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.458 [2024-10-07 09:52:41.948105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.458 qpair failed and we were unable to recover it.
00:31:42.458 [2024-10-07 09:52:41.958091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.458 [2024-10-07 09:52:41.958144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.458 [2024-10-07 09:52:41.958158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.458 [2024-10-07 09:52:41.958165] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.458 [2024-10-07 09:52:41.958171] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.458 [2024-10-07 09:52:41.958185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.458 qpair failed and we were unable to recover it.
00:31:42.458 [2024-10-07 09:52:41.968088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.458 [2024-10-07 09:52:41.968141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.458 [2024-10-07 09:52:41.968155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.458 [2024-10-07 09:52:41.968162] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.458 [2024-10-07 09:52:41.968168] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90
00:31:42.458 [2024-10-07 09:52:41.968182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.458 qpair failed and we were unable to recover it.
00:31:42.458 [2024-10-07 09:52:41.978131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.458 [2024-10-07 09:52:41.978218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.458 [2024-10-07 09:52:41.978231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.458 [2024-10-07 09:52:41.978238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.458 [2024-10-07 09:52:41.978244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.458 [2024-10-07 09:52:41.978259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.458 qpair failed and we were unable to recover it. 00:31:42.458 [2024-10-07 09:52:41.988116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.458 [2024-10-07 09:52:41.988203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.458 [2024-10-07 09:52:41.988216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.458 [2024-10-07 09:52:41.988223] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.458 [2024-10-07 09:52:41.988230] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.458 [2024-10-07 09:52:41.988244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.458 qpair failed and we were unable to recover it. 00:31:42.458 [2024-10-07 09:52:41.998200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.458 [2024-10-07 09:52:41.998252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.458 [2024-10-07 09:52:41.998265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.458 [2024-10-07 09:52:41.998279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.458 [2024-10-07 09:52:41.998286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.458 [2024-10-07 09:52:41.998300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.458 qpair failed and we were unable to recover it. 
00:31:42.458 [2024-10-07 09:52:42.008114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.458 [2024-10-07 09:52:42.008167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.458 [2024-10-07 09:52:42.008180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.458 [2024-10-07 09:52:42.008188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.458 [2024-10-07 09:52:42.008194] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.458 [2024-10-07 09:52:42.008208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.458 qpair failed and we were unable to recover it. 00:31:42.458 [2024-10-07 09:52:42.018206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.458 [2024-10-07 09:52:42.018256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.458 [2024-10-07 09:52:42.018270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.458 [2024-10-07 09:52:42.018277] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.458 [2024-10-07 09:52:42.018283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.458 [2024-10-07 09:52:42.018297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.458 qpair failed and we were unable to recover it. 00:31:42.458 [2024-10-07 09:52:42.028227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.458 [2024-10-07 09:52:42.028274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.458 [2024-10-07 09:52:42.028288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.458 [2024-10-07 09:52:42.028295] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.458 [2024-10-07 09:52:42.028301] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.458 [2024-10-07 09:52:42.028315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.458 qpair failed and we were unable to recover it. 
00:31:42.458 [2024-10-07 09:52:42.038300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.458 [2024-10-07 09:52:42.038352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.458 [2024-10-07 09:52:42.038369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.458 [2024-10-07 09:52:42.038376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.458 [2024-10-07 09:52:42.038382] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.458 [2024-10-07 09:52:42.038398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.458 qpair failed and we were unable to recover it. 00:31:42.458 [2024-10-07 09:52:42.048337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.458 [2024-10-07 09:52:42.048408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.458 [2024-10-07 09:52:42.048433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.458 [2024-10-07 09:52:42.048442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.458 [2024-10-07 09:52:42.048449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.458 [2024-10-07 09:52:42.048468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.458 qpair failed and we were unable to recover it. 00:31:42.458 [2024-10-07 09:52:42.058312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.458 [2024-10-07 09:52:42.058364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.458 [2024-10-07 09:52:42.058379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.458 [2024-10-07 09:52:42.058387] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.458 [2024-10-07 09:52:42.058393] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.458 [2024-10-07 09:52:42.058409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.458 qpair failed and we were unable to recover it. 
00:31:42.458 [2024-10-07 09:52:42.068334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.459 [2024-10-07 09:52:42.068395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.459 [2024-10-07 09:52:42.068420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.459 [2024-10-07 09:52:42.068429] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.459 [2024-10-07 09:52:42.068436] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.459 [2024-10-07 09:52:42.068454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.459 qpair failed and we were unable to recover it. 00:31:42.459 [2024-10-07 09:52:42.078392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.459 [2024-10-07 09:52:42.078447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.459 [2024-10-07 09:52:42.078462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.459 [2024-10-07 09:52:42.078470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.459 [2024-10-07 09:52:42.078476] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.459 [2024-10-07 09:52:42.078491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.459 qpair failed and we were unable to recover it. 00:31:42.459 [2024-10-07 09:52:42.088426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.459 [2024-10-07 09:52:42.088478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.459 [2024-10-07 09:52:42.088492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.459 [2024-10-07 09:52:42.088504] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.459 [2024-10-07 09:52:42.088511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.459 [2024-10-07 09:52:42.088526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.459 qpair failed and we were unable to recover it. 
00:31:42.459 [2024-10-07 09:52:42.098427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.459 [2024-10-07 09:52:42.098476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.459 [2024-10-07 09:52:42.098489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.459 [2024-10-07 09:52:42.098497] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.459 [2024-10-07 09:52:42.098503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.459 [2024-10-07 09:52:42.098517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.459 qpair failed and we were unable to recover it. 00:31:42.459 [2024-10-07 09:52:42.108443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.459 [2024-10-07 09:52:42.108489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.459 [2024-10-07 09:52:42.108502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.459 [2024-10-07 09:52:42.108509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.459 [2024-10-07 09:52:42.108516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.459 [2024-10-07 09:52:42.108530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.459 qpair failed and we were unable to recover it. 00:31:42.721 [2024-10-07 09:52:42.118527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.721 [2024-10-07 09:52:42.118628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.721 [2024-10-07 09:52:42.118642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.721 [2024-10-07 09:52:42.118651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.721 [2024-10-07 09:52:42.118658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.721 [2024-10-07 09:52:42.118673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.721 qpair failed and we were unable to recover it. 
00:31:42.721 [2024-10-07 09:52:42.128532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.721 [2024-10-07 09:52:42.128580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.721 [2024-10-07 09:52:42.128594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.721 [2024-10-07 09:52:42.128601] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.721 [2024-10-07 09:52:42.128608] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.721 [2024-10-07 09:52:42.128626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.721 qpair failed and we were unable to recover it. 00:31:42.721 [2024-10-07 09:52:42.138401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.721 [2024-10-07 09:52:42.138449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.721 [2024-10-07 09:52:42.138463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.721 [2024-10-07 09:52:42.138471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.721 [2024-10-07 09:52:42.138478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.721 [2024-10-07 09:52:42.138492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.721 qpair failed and we were unable to recover it. 00:31:42.721 [2024-10-07 09:52:42.148586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.721 [2024-10-07 09:52:42.148633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.721 [2024-10-07 09:52:42.148647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.721 [2024-10-07 09:52:42.148655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.721 [2024-10-07 09:52:42.148661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.721 [2024-10-07 09:52:42.148676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.721 qpair failed and we were unable to recover it. 
00:31:42.721 [2024-10-07 09:52:42.158638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.721 [2024-10-07 09:52:42.158709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.721 [2024-10-07 09:52:42.158722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.721 [2024-10-07 09:52:42.158729] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.721 [2024-10-07 09:52:42.158736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.721 [2024-10-07 09:52:42.158750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.721 qpair failed and we were unable to recover it. 00:31:42.721 [2024-10-07 09:52:42.168634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.721 [2024-10-07 09:52:42.168685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.721 [2024-10-07 09:52:42.168698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.721 [2024-10-07 09:52:42.168705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.721 [2024-10-07 09:52:42.168712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.721 [2024-10-07 09:52:42.168726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.721 qpair failed and we were unable to recover it. 00:31:42.721 [2024-10-07 09:52:42.178653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.721 [2024-10-07 09:52:42.178700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.721 [2024-10-07 09:52:42.178717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.721 [2024-10-07 09:52:42.178724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.721 [2024-10-07 09:52:42.178730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.721 [2024-10-07 09:52:42.178744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.721 qpair failed and we were unable to recover it. 
00:31:42.721 [2024-10-07 09:52:42.188688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.721 [2024-10-07 09:52:42.188738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.721 [2024-10-07 09:52:42.188752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.721 [2024-10-07 09:52:42.188759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.721 [2024-10-07 09:52:42.188765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.721 [2024-10-07 09:52:42.188780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.721 qpair failed and we were unable to recover it. 00:31:42.721 [2024-10-07 09:52:42.198622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.721 [2024-10-07 09:52:42.198686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.721 [2024-10-07 09:52:42.198700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.721 [2024-10-07 09:52:42.198707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.721 [2024-10-07 09:52:42.198713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.721 [2024-10-07 09:52:42.198727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.721 qpair failed and we were unable to recover it. 00:31:42.721 [2024-10-07 09:52:42.208716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.721 [2024-10-07 09:52:42.208768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.721 [2024-10-07 09:52:42.208781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.721 [2024-10-07 09:52:42.208788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.721 [2024-10-07 09:52:42.208795] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.721 [2024-10-07 09:52:42.208809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.721 qpair failed and we were unable to recover it. 
00:31:42.721 [2024-10-07 09:52:42.218742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.721 [2024-10-07 09:52:42.218786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.721 [2024-10-07 09:52:42.218800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.721 [2024-10-07 09:52:42.218807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.721 [2024-10-07 09:52:42.218814] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.721 [2024-10-07 09:52:42.218831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.721 qpair failed and we were unable to recover it. 00:31:42.721 [2024-10-07 09:52:42.228799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.721 [2024-10-07 09:52:42.228846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.721 [2024-10-07 09:52:42.228860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.721 [2024-10-07 09:52:42.228867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.721 [2024-10-07 09:52:42.228873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.721 [2024-10-07 09:52:42.228887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.721 qpair failed and we were unable to recover it. 00:31:42.721 [2024-10-07 09:52:42.238871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.721 [2024-10-07 09:52:42.238925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.722 [2024-10-07 09:52:42.238939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.722 [2024-10-07 09:52:42.238946] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.722 [2024-10-07 09:52:42.238953] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.722 [2024-10-07 09:52:42.238967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.722 qpair failed and we were unable to recover it. 
00:31:42.722 [2024-10-07 09:52:42.248869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.722 [2024-10-07 09:52:42.248917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.722 [2024-10-07 09:52:42.248930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.722 [2024-10-07 09:52:42.248937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.722 [2024-10-07 09:52:42.248944] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.722 [2024-10-07 09:52:42.248958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.722 qpair failed and we were unable to recover it. 00:31:42.722 [2024-10-07 09:52:42.258876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.722 [2024-10-07 09:52:42.258922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.722 [2024-10-07 09:52:42.258936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.722 [2024-10-07 09:52:42.258943] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.722 [2024-10-07 09:52:42.258949] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.722 [2024-10-07 09:52:42.258963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.722 qpair failed and we were unable to recover it. 00:31:42.722 [2024-10-07 09:52:42.268771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.722 [2024-10-07 09:52:42.268826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.722 [2024-10-07 09:52:42.268844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.722 [2024-10-07 09:52:42.268851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.722 [2024-10-07 09:52:42.268858] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.722 [2024-10-07 09:52:42.268872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.722 qpair failed and we were unable to recover it. 
00:31:42.722 [2024-10-07 09:52:42.279000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.722 [2024-10-07 09:52:42.279055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.722 [2024-10-07 09:52:42.279068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.722 [2024-10-07 09:52:42.279075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.722 [2024-10-07 09:52:42.279082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.722 [2024-10-07 09:52:42.279096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.722 qpair failed and we were unable to recover it. 00:31:42.722 [2024-10-07 09:52:42.288832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.722 [2024-10-07 09:52:42.288880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.722 [2024-10-07 09:52:42.288894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.722 [2024-10-07 09:52:42.288901] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.722 [2024-10-07 09:52:42.288908] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.722 [2024-10-07 09:52:42.288922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.722 qpair failed and we were unable to recover it. 00:31:42.722 [2024-10-07 09:52:42.298976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.722 [2024-10-07 09:52:42.299025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.722 [2024-10-07 09:52:42.299038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.722 [2024-10-07 09:52:42.299045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.722 [2024-10-07 09:52:42.299051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.722 [2024-10-07 09:52:42.299065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.722 qpair failed and we were unable to recover it. 
00:31:42.722 [2024-10-07 09:52:42.309003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.722 [2024-10-07 09:52:42.309050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.722 [2024-10-07 09:52:42.309064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.722 [2024-10-07 09:52:42.309071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.722 [2024-10-07 09:52:42.309081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.722 [2024-10-07 09:52:42.309095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.722 qpair failed and we were unable to recover it. 00:31:42.722 [2024-10-07 09:52:42.319074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.722 [2024-10-07 09:52:42.319129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.722 [2024-10-07 09:52:42.319143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.722 [2024-10-07 09:52:42.319150] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.722 [2024-10-07 09:52:42.319157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.722 [2024-10-07 09:52:42.319171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.722 qpair failed and we were unable to recover it. 00:31:42.722 [2024-10-07 09:52:42.329076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.722 [2024-10-07 09:52:42.329170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.722 [2024-10-07 09:52:42.329183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.722 [2024-10-07 09:52:42.329190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.722 [2024-10-07 09:52:42.329197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.722 [2024-10-07 09:52:42.329211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.722 qpair failed and we were unable to recover it. 
00:31:42.722 [2024-10-07 09:52:42.339095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.722 [2024-10-07 09:52:42.339141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.722 [2024-10-07 09:52:42.339154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.722 [2024-10-07 09:52:42.339161] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.722 [2024-10-07 09:52:42.339168] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.722 [2024-10-07 09:52:42.339182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.722 qpair failed and we were unable to recover it. 00:31:42.722 [2024-10-07 09:52:42.349092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.722 [2024-10-07 09:52:42.349134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.722 [2024-10-07 09:52:42.349147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.722 [2024-10-07 09:52:42.349154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.722 [2024-10-07 09:52:42.349160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.722 [2024-10-07 09:52:42.349174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.722 qpair failed and we were unable to recover it. 00:31:42.722 [2024-10-07 09:52:42.359176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.722 [2024-10-07 09:52:42.359237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.722 [2024-10-07 09:52:42.359250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.722 [2024-10-07 09:52:42.359257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.722 [2024-10-07 09:52:42.359264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.722 [2024-10-07 09:52:42.359278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.722 qpair failed and we were unable to recover it. 
00:31:42.722 [2024-10-07 09:52:42.369188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.722 [2024-10-07 09:52:42.369239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.722 [2024-10-07 09:52:42.369252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.722 [2024-10-07 09:52:42.369260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.723 [2024-10-07 09:52:42.369266] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.723 [2024-10-07 09:52:42.369280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.723 qpair failed and we were unable to recover it. 00:31:42.723 [2024-10-07 09:52:42.379166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.723 [2024-10-07 09:52:42.379212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.723 [2024-10-07 09:52:42.379225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.723 [2024-10-07 09:52:42.379232] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.723 [2024-10-07 09:52:42.379239] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.723 [2024-10-07 09:52:42.379253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.723 qpair failed and we were unable to recover it. 00:31:42.985 [2024-10-07 09:52:42.389235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.985 [2024-10-07 09:52:42.389283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.985 [2024-10-07 09:52:42.389296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.985 [2024-10-07 09:52:42.389304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.985 [2024-10-07 09:52:42.389310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.985 [2024-10-07 09:52:42.389324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.985 qpair failed and we were unable to recover it. 
00:31:42.985 [2024-10-07 09:52:42.399302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.985 [2024-10-07 09:52:42.399382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.985 [2024-10-07 09:52:42.399396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.985 [2024-10-07 09:52:42.399403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.985 [2024-10-07 09:52:42.399413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.985 [2024-10-07 09:52:42.399427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.985 qpair failed and we were unable to recover it. 00:31:42.985 [2024-10-07 09:52:42.409256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.985 [2024-10-07 09:52:42.409309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.985 [2024-10-07 09:52:42.409322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.985 [2024-10-07 09:52:42.409329] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.985 [2024-10-07 09:52:42.409336] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.985 [2024-10-07 09:52:42.409350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.985 qpair failed and we were unable to recover it. 00:31:42.985 [2024-10-07 09:52:42.419303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.985 [2024-10-07 09:52:42.419384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.985 [2024-10-07 09:52:42.419398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.985 [2024-10-07 09:52:42.419405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.985 [2024-10-07 09:52:42.419411] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.985 [2024-10-07 09:52:42.419425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.985 qpair failed and we were unable to recover it. 
00:31:42.985 [2024-10-07 09:52:42.429327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.985 [2024-10-07 09:52:42.429373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.985 [2024-10-07 09:52:42.429387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.985 [2024-10-07 09:52:42.429394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.985 [2024-10-07 09:52:42.429400] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.985 [2024-10-07 09:52:42.429414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.985 qpair failed and we were unable to recover it. 00:31:42.985 [2024-10-07 09:52:42.439401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.985 [2024-10-07 09:52:42.439496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.985 [2024-10-07 09:52:42.439509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.985 [2024-10-07 09:52:42.439516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.985 [2024-10-07 09:52:42.439522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.985 [2024-10-07 09:52:42.439536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.985 qpair failed and we were unable to recover it. 00:31:42.985 [2024-10-07 09:52:42.449400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.985 [2024-10-07 09:52:42.449454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.985 [2024-10-07 09:52:42.449468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.985 [2024-10-07 09:52:42.449475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.985 [2024-10-07 09:52:42.449482] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.985 [2024-10-07 09:52:42.449495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.985 qpair failed and we were unable to recover it. 
00:31:42.985 [2024-10-07 09:52:42.459396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.985 [2024-10-07 09:52:42.459444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.985 [2024-10-07 09:52:42.459458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.985 [2024-10-07 09:52:42.459465] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.985 [2024-10-07 09:52:42.459471] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.985 [2024-10-07 09:52:42.459485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.985 qpair failed and we were unable to recover it. 00:31:42.985 [2024-10-07 09:52:42.469446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.985 [2024-10-07 09:52:42.469490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.985 [2024-10-07 09:52:42.469503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.985 [2024-10-07 09:52:42.469510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.985 [2024-10-07 09:52:42.469517] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.985 [2024-10-07 09:52:42.469531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.985 qpair failed and we were unable to recover it. 00:31:42.985 [2024-10-07 09:52:42.479515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.985 [2024-10-07 09:52:42.479576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.985 [2024-10-07 09:52:42.479591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.985 [2024-10-07 09:52:42.479598] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.985 [2024-10-07 09:52:42.479605] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.985 [2024-10-07 09:52:42.479627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.985 qpair failed and we were unable to recover it. 
00:31:42.985 [2024-10-07 09:52:42.489494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.985 [2024-10-07 09:52:42.489548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.985 [2024-10-07 09:52:42.489563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.986 [2024-10-07 09:52:42.489573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.986 [2024-10-07 09:52:42.489580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.986 [2024-10-07 09:52:42.489594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.986 qpair failed and we were unable to recover it. 00:31:42.986 [2024-10-07 09:52:42.499523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.986 [2024-10-07 09:52:42.499571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.986 [2024-10-07 09:52:42.499585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.986 [2024-10-07 09:52:42.499592] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.986 [2024-10-07 09:52:42.499598] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.986 [2024-10-07 09:52:42.499612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.986 qpair failed and we were unable to recover it. 00:31:42.986 [2024-10-07 09:52:42.509543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.986 [2024-10-07 09:52:42.509597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.986 [2024-10-07 09:52:42.509610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.986 [2024-10-07 09:52:42.509621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.986 [2024-10-07 09:52:42.509628] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.986 [2024-10-07 09:52:42.509642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.986 qpair failed and we were unable to recover it. 
00:31:42.986 [2024-10-07 09:52:42.519492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.986 [2024-10-07 09:52:42.519547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.986 [2024-10-07 09:52:42.519561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.986 [2024-10-07 09:52:42.519568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.986 [2024-10-07 09:52:42.519574] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.986 [2024-10-07 09:52:42.519588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.986 qpair failed and we were unable to recover it. 00:31:42.986 [2024-10-07 09:52:42.529486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.986 [2024-10-07 09:52:42.529533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.986 [2024-10-07 09:52:42.529547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.986 [2024-10-07 09:52:42.529554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.986 [2024-10-07 09:52:42.529560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.986 [2024-10-07 09:52:42.529574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.986 qpair failed and we were unable to recover it. 00:31:42.986 [2024-10-07 09:52:42.539628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.986 [2024-10-07 09:52:42.539673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.986 [2024-10-07 09:52:42.539687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.986 [2024-10-07 09:52:42.539694] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.986 [2024-10-07 09:52:42.539700] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd70000b90 00:31:42.986 [2024-10-07 09:52:42.539714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.986 qpair failed and we were unable to recover it. 
00:31:42.986 [2024-10-07 09:52:42.549659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.986 [2024-10-07 09:52:42.549757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.986 [2024-10-07 09:52:42.549820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.986 [2024-10-07 09:52:42.549847] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.986 [2024-10-07 09:52:42.549868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd64000b90 00:31:42.986 [2024-10-07 09:52:42.549922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:42.986 qpair failed and we were unable to recover it. 00:31:42.986 [2024-10-07 09:52:42.559770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.986 [2024-10-07 09:52:42.559864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.986 [2024-10-07 09:52:42.559895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.986 [2024-10-07 09:52:42.559911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.986 [2024-10-07 09:52:42.559925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd64000b90 00:31:42.986 [2024-10-07 09:52:42.559956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:42.986 qpair failed and we were unable to recover it. 00:31:42.986 [2024-10-07 09:52:42.569720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.986 [2024-10-07 09:52:42.569778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.986 [2024-10-07 09:52:42.569799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.986 [2024-10-07 09:52:42.569810] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.986 [2024-10-07 09:52:42.569821] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd64000b90 00:31:42.986 [2024-10-07 09:52:42.569843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:42.986 qpair failed and we were unable to recover it. 
00:31:42.986 [2024-10-07 09:52:42.579721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.986 [2024-10-07 09:52:42.579852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.986 [2024-10-07 09:52:42.579917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.986 [2024-10-07 09:52:42.579952] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.986 [2024-10-07 09:52:42.579974] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x85c550 00:31:42.986 [2024-10-07 09:52:42.580026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.986 qpair failed and we were unable to recover it. 00:31:42.986 [2024-10-07 09:52:42.589766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.986 [2024-10-07 09:52:42.589835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.986 [2024-10-07 09:52:42.589866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.986 [2024-10-07 09:52:42.589882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.986 [2024-10-07 09:52:42.589896] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x85c550 00:31:42.986 [2024-10-07 09:52:42.589927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.986 qpair failed and we were unable to recover it. 00:31:42.986 [2024-10-07 09:52:42.590088] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:31:42.986 A controller has encountered a failure and is being reset. 00:31:42.986 Controller properly reset. 00:31:42.986 Initializing NVMe Controllers 00:31:42.986 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:42.986 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:42.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:42.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:42.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:42.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:42.986 Initialization complete. Launching workers. 
00:31:42.986 Starting thread on core 1 00:31:42.986 Starting thread on core 2 00:31:42.986 Starting thread on core 3 00:31:42.986 Starting thread on core 0 00:31:42.986 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:31:42.986 00:31:42.986 real 0m11.416s 00:31:42.986 user 0m21.679s 00:31:42.986 sys 0m3.580s 00:31:42.986 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # xtrace_disable 00:31:42.986 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:42.986 ************************************ 00:31:42.986 END TEST nvmf_target_disconnect_tc2 00:31:42.986 ************************************ 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:43.248 rmmod nvme_tcp 00:31:43.248 rmmod nvme_fabrics 00:31:43.248 rmmod nvme_keyring 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 3553774 ']' 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 3553774 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' -z 3553774 ']' 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # kill -0 3553774 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # uname 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3553774 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # process_name=reactor_4 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@963 -- # '[' reactor_4 = sudo ']' 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3553774' 00:31:43.248 killing process with pid 3553774 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@972 -- # kill 3553774 00:31:43.248 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@977 -- # wait 3553774 00:31:43.510 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:43.510 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:43.510 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:43.510 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:31:43.510 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:31:43.510 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:43.510 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:31:43.510 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:43.510 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:43.510 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.510 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:43.510 09:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.506 09:52:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:45.506 00:31:45.506 real 0m22.102s 00:31:45.506 user 0m49.529s 00:31:45.506 sys 0m9.923s 00:31:45.506 09:52:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # xtrace_disable 00:31:45.506 09:52:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:45.506 ************************************ 00:31:45.506 END TEST nvmf_target_disconnect 00:31:45.506 ************************************ 00:31:45.506 09:52:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:45.506 00:31:45.506 real 6m37.078s 00:31:45.506 user 11m21.095s 00:31:45.506 sys 2m18.635s 00:31:45.506 09:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # xtrace_disable 00:31:45.506 09:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.506 ************************************ 00:31:45.506 END TEST nvmf_host 00:31:45.506 ************************************ 00:31:45.506 09:52:45 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:31:45.506 09:52:45 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:31:45.506 09:52:45 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:45.506 09:52:45 nvmf_tcp -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:31:45.506 09:52:45 nvmf_tcp -- common/autotest_common.sh@1110 -- # xtrace_disable 00:31:45.506 09:52:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:45.767 ************************************ 00:31:45.767 START TEST nvmf_target_core_interrupt_mode 00:31:45.767 ************************************ 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:45.767 * Looking for test storage... 00:31:45.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1626 -- # lcov --version 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:31:45.767 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:31:45.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.768 --rc genhtml_branch_coverage=1 00:31:45.768 --rc genhtml_function_coverage=1 00:31:45.768 --rc genhtml_legend=1 00:31:45.768 --rc geninfo_all_blocks=1 00:31:45.768 --rc geninfo_unexecuted_blocks=1 00:31:45.768 00:31:45.768 ' 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:31:45.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.768 --rc genhtml_branch_coverage=1 00:31:45.768 --rc genhtml_function_coverage=1 00:31:45.768 --rc genhtml_legend=1 00:31:45.768 --rc geninfo_all_blocks=1 00:31:45.768 --rc geninfo_unexecuted_blocks=1 00:31:45.768 00:31:45.768 ' 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:31:45.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.768 --rc genhtml_branch_coverage=1 00:31:45.768 --rc genhtml_function_coverage=1 00:31:45.768 --rc genhtml_legend=1 00:31:45.768 --rc geninfo_all_blocks=1 00:31:45.768 --rc geninfo_unexecuted_blocks=1 00:31:45.768 00:31:45.768 ' 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:31:45.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.768 --rc genhtml_branch_coverage=1 00:31:45.768 --rc genhtml_function_coverage=1 00:31:45.768 --rc genhtml_legend=1 00:31:45.768 --rc geninfo_all_blocks=1 00:31:45.768 --rc geninfo_unexecuted_blocks=1 00:31:45.768 00:31:45.768 ' 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.768 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.029 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:46.029 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:46.029 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.029 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:46.029 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.029 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1110 -- # xtrace_disable 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:46.030 ************************************ 00:31:46.030 START TEST nvmf_abort 00:31:46.030 ************************************ 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:46.030 * Looking for test storage... 
00:31:46.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1626 -- # lcov --version 00:31:46.030 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:31:46.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.292 --rc genhtml_branch_coverage=1 00:31:46.292 --rc genhtml_function_coverage=1 00:31:46.292 --rc genhtml_legend=1 00:31:46.292 --rc geninfo_all_blocks=1 00:31:46.292 --rc geninfo_unexecuted_blocks=1 00:31:46.292 00:31:46.292 ' 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:31:46.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.292 --rc genhtml_branch_coverage=1 00:31:46.292 --rc genhtml_function_coverage=1 00:31:46.292 --rc genhtml_legend=1 00:31:46.292 --rc geninfo_all_blocks=1 00:31:46.292 --rc geninfo_unexecuted_blocks=1 00:31:46.292 00:31:46.292 ' 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:31:46.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.292 --rc genhtml_branch_coverage=1 00:31:46.292 --rc genhtml_function_coverage=1 00:31:46.292 --rc genhtml_legend=1 00:31:46.292 --rc geninfo_all_blocks=1 00:31:46.292 --rc geninfo_unexecuted_blocks=1 00:31:46.292 00:31:46.292 ' 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:31:46.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.292 --rc genhtml_branch_coverage=1 00:31:46.292 --rc genhtml_function_coverage=1 00:31:46.292 --rc genhtml_legend=1 00:31:46.292 --rc geninfo_all_blocks=1 00:31:46.292 --rc geninfo_unexecuted_blocks=1 00:31:46.292 00:31:46.292 ' 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:46.292 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:31:46.293 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:54.436 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:54.436 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:54.436 Found net devices under 0000:31:00.0: cvl_0_0 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:54.436 Found net devices under 0000:31:00.1: cvl_0_1 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.436 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:54.437 09:52:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:54.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:54.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.537 ms 00:31:54.437 00:31:54.437 --- 10.0.0.2 ping statistics --- 00:31:54.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.437 rtt min/avg/max/mdev = 0.537/0.537/0.537/0.000 ms 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:54.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:54.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:31:54.437 00:31:54.437 --- 10.0.0.1 ping statistics --- 00:31:54.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.437 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=3559306 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 3559306 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@834 -- # '[' -z 3559306 ']' 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local max_retries=100 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@843 -- # xtrace_disable 00:31:54.437 09:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:54.437 [2024-10-07 09:52:53.555695] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:31:54.437 [2024-10-07 09:52:53.556830] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:31:54.437 [2024-10-07 09:52:53.556882] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.437 [2024-10-07 09:52:53.647801] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:54.437 [2024-10-07 09:52:53.741853] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:54.437 [2024-10-07 09:52:53.741908] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:54.437 [2024-10-07 09:52:53.741917] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:54.437 [2024-10-07 09:52:53.741924] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:54.437 [2024-10-07 09:52:53.741930] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:54.437 [2024-10-07 09:52:53.743260] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:31:54.437 [2024-10-07 09:52:53.743420] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.437 [2024-10-07 09:52:53.743420] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:31:54.437 [2024-10-07 09:52:53.837157] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:54.437 [2024-10-07 09:52:53.838217] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:54.437 [2024-10-07 09:52:53.838298] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:54.437 [2024-10-07 09:52:53.838523] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
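The trace above is nvmf/common.sh's nvmf_tcp_init at work: one port of the E810 pair (cvl_0_0) is moved into a fresh network namespace and addressed as 10.0.0.2/24, its sibling cvl_0_1 stays in the root namespace as 10.0.0.1/24, the ipts helper inserts an iptables ACCEPT rule tagged with an SPDK_NVMF comment so cleanup can find it later, and a ping in each direction proves the path before the target starts. A condensed, standalone sketch of that sequence (interface and namespace names are taken from the log; the script body is an illustrative reconstruction, not the verbatim helper):

#!/usr/bin/env bash
set -e
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"              # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"          # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port; the comment is the tag that cleanup greps for later.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                             # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1         # target namespace -> root namespace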
00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@867 -- # return 0 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@733 -- # xtrace_disable 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:55.011 [2024-10-07 09:52:54.424327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:55.011 Malloc0 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:55.011 Delay0 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:55.011 09:52:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:55.011 [2024-10-07 09:52:54.512308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:31:55.011 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:31:55.011 [2024-10-07 09:52:54.629383] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:57.555 Initializing NVMe Controllers 00:31:57.555 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:57.555 controller IO queue size 128 less than required 00:31:57.555 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:31:57.555 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:31:57.555 Initialization complete. Launching workers. 
00:31:57.555 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 23435 00:31:57.555 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 23492, failed to submit 66 00:31:57.555 success 23435, unsuccessful 57, failed 0 00:31:57.555 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:57.555 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:31:57.555 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:57.555 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:31:57.555 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:31:57.555 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:31:57.555 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:57.555 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:31:57.555 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:57.555 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:31:57.555 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:57.555 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:57.555 rmmod nvme_tcp 00:31:57.555 rmmod nvme_fabrics 00:31:57.555 rmmod nvme_keyring 00:31:57.555 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:57.555 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:31:57.555 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:31:57.555 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 3559306 ']' 00:31:57.555 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 3559306 00:31:57.555 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@953 -- # '[' -z 3559306 ']' 00:31:57.556 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # kill -0 3559306 00:31:57.556 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # uname 00:31:57.556 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:31:57.556 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3559306 00:31:57.556 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:31:57.556 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:31:57.556 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3559306' 00:31:57.556 killing process with pid 3559306 
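The abort exercise above was provisioned over the RPC socket in exactly the order the rpc_cmd traces show: a TCP transport, a 64 MB malloc bdev with 4096-byte blocks, a delay bdev layered on top with roughly one-second average and tail latencies (so reads stay in flight long enough to be abortable), a subsystem carrying that namespace plus a 10.0.0.2:4420 listener, and finally the abort example at queue depth 128 for one second. The tallies add up: of 23,558 commands issued, 23,435 were aborted, 123 completed before their abort landed, 57 aborts came back unsuccessful, and none failed outright. Replayed as plain rpc.py calls (flags copied verbatim from the log; the UNIX-domain RPC socket works across the network namespace because it is a filesystem path):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0        # 64 MB bdev, 4096-byte blocks
# ~1 s average/p99 latency on reads and writes keeps commands abortable:
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128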
00:31:57.556 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # kill 3559306 00:31:57.556 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@977 -- # wait 3559306 00:31:57.556 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:57.556 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:57.556 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:57.556 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:31:57.556 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:31:57.556 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:57.556 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:31:57.556 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:57.556 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:57.556 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.556 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.556 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.490 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:59.490 00:31:59.490 real 0m13.609s 00:31:59.490 user 0m10.798s 00:31:59.490 sys 0m7.164s 00:31:59.490 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # xtrace_disable 00:31:59.490 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.490 ************************************ 00:31:59.490 END TEST nvmf_abort 00:31:59.490 ************************************ 00:31:59.490 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:59.490 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:31:59.490 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1110 -- # xtrace_disable 00:31:59.490 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:59.750 ************************************ 00:31:59.750 START TEST nvmf_ns_hotplug_stress 00:31:59.750 ************************************ 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:59.750 * Looking for test storage... 
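Teardown mirrors the setup, and the SPDK_NVMF comment planted on the firewall rule is what makes it surgical: the iptr helper rewrites the ruleset through iptables-save and iptables-restore minus any tagged line, instead of flushing whole chains. A sketch of the cleanup path traced above (the namespace deletion is the presumed body of _remove_spdk_ns, which the log invokes but never expands):

# Sketch of nvmftestfini's cleanup, reconstructed from the trace.
sync
modprobe -v -r nvme-tcp                       # also drags out nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                               # nvmf_tgt PID recorded at startup
# Strip only SPDK-tagged rules; everything else is restored untouched.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk               # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1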
00:31:59.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1626 -- # lcov --version 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:31:59.750 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:32:00.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.012 --rc genhtml_branch_coverage=1 00:32:00.012 --rc genhtml_function_coverage=1 00:32:00.012 --rc genhtml_legend=1 00:32:00.012 --rc geninfo_all_blocks=1 00:32:00.012 --rc geninfo_unexecuted_blocks=1 00:32:00.012 00:32:00.012 ' 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:32:00.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.012 --rc genhtml_branch_coverage=1 00:32:00.012 --rc genhtml_function_coverage=1 00:32:00.012 --rc genhtml_legend=1 00:32:00.012 --rc geninfo_all_blocks=1 00:32:00.012 --rc geninfo_unexecuted_blocks=1 00:32:00.012 00:32:00.012 ' 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:32:00.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.012 --rc genhtml_branch_coverage=1 00:32:00.012 --rc genhtml_function_coverage=1 00:32:00.012 --rc genhtml_legend=1 00:32:00.012 --rc geninfo_all_blocks=1 00:32:00.012 --rc geninfo_unexecuted_blocks=1 00:32:00.012 00:32:00.012 ' 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:32:00.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.012 --rc genhtml_branch_coverage=1 00:32:00.012 --rc genhtml_function_coverage=1 
00:32:00.012 --rc genhtml_legend=1 00:32:00.012 --rc geninfo_all_blocks=1 00:32:00.012 --rc geninfo_unexecuted_blocks=1 00:32:00.012 00:32:00.012 ' 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.012 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
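The lcov gate traced just above (lt 1.15 2 in scripts/common.sh) is a component-wise version compare: both strings are split on '.', '-' and ':', components are compared numerically left to right with missing ones treated as zero, so 1.15 < 2 holds and autotest keeps its --rc lcov_*_coverage options. A standalone reconstruction of that logic (behaviorally equivalent to the trace, not a verbatim copy of scripts/common.sh):

# Reconstruction of the version compare traced above.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local op=$2 ver1_l ver2_l v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        # Missing components count as 0: "2" behaves like "2.0" against "1.15".
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *'='* ]]   # versions equal: true for ==, <=, >=
}

lt 1.15 2 && echo 'lcov 1.15 is older than 2'   # prints the message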
00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2-4 -- # PATH=... [three near-identical assignments, each prepending /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the already-duplicated toolchain prefixes; the final value is echoed in full below] 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.013 09:52:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:32:00.013 09:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:08.153 09:53:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:08.153 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.153 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:08.154 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:08.154 09:53:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:08.154 Found net devices under 0000:31:00.0: cvl_0_0 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:08.154 Found net devices under 0000:31:00.1: cvl_0_1 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.154 09:53:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:08.154 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:08.154 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:08.154 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:08.154 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:08.154 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:08.154 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set lo up 00:32:08.154 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:08.154 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:08.154 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:08.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:08.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:32:08.154 00:32:08.154 --- 10.0.0.2 ping statistics --- 00:32:08.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.154 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:32:08.154 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:08.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:08.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:32:08.154 00:32:08.154 --- 10.0.0.1 ping statistics --- 00:32:08.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.154 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:32:08.154 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:08.154 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:32:08.154 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:08.154 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:08.154 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:08.154 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:08.154 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:08.154 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:08.155 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:08.155 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:32:08.155 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:08.155 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:32:08.155 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:08.155 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=3564362 00:32:08.155 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 3564362 00:32:08.155 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 
-- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:08.155 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # '[' -z 3564362 ']' 00:32:08.155 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.155 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local max_retries=100 00:32:08.155 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:08.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:08.155 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@843 -- # xtrace_disable 00:32:08.155 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:08.155 [2024-10-07 09:53:07.294234] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:08.155 [2024-10-07 09:53:07.295362] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:32:08.155 [2024-10-07 09:53:07.295414] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:08.155 [2024-10-07 09:53:07.383284] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:08.155 [2024-10-07 09:53:07.477081] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:08.155 [2024-10-07 09:53:07.477135] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:08.155 [2024-10-07 09:53:07.477144] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:08.155 [2024-10-07 09:53:07.477151] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:08.155 [2024-10-07 09:53:07.477158] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:08.155 [2024-10-07 09:53:07.478399] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:08.155 [2024-10-07 09:53:07.478562] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.155 [2024-10-07 09:53:07.478562] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:32:08.155 [2024-10-07 09:53:07.564113] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:08.155 [2024-10-07 09:53:07.565145] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:08.155 [2024-10-07 09:53:07.565169] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:08.155 [2024-10-07 09:53:07.565421] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
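Starting the target for the stress run repeats the abort test's pattern: nvmf_tgt runs inside the namespace with every tracepoint group enabled (-e 0xFFFF), interrupt mode on, and core mask 0xE (cores 1-3, matching the three reactors in the notices above), and waitforlisten blocks until the RPC socket answers before any rpc.py call is issued. A minimal version of that handshake; the polling body is a plausible stand-in for the real waitforlisten helper, whose internals the trace only hints at:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!

# Poll until the target's UNIX-domain RPC socket accepts a request.
for (( i = 100; i > 0; i-- )); do
    kill -0 "$nvmfpid" || { echo 'nvmf_tgt exited during startup' >&2; exit 1; }
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
done
(( i > 0 )) || { echo 'timed out waiting for /var/tmp/spdk.sock' >&2; exit 1; }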
00:32:08.727 09:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:32:08.727 09:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@867 -- # return 0 00:32:08.727 09:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:08.727 09:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@733 -- # xtrace_disable 00:32:08.727 09:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:08.727 09:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:08.727 09:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:32:08.727 09:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:08.727 [2024-10-07 09:53:08.323442] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:08.727 09:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:08.987 09:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.248 [2024-10-07 09:53:08.736386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.248 09:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:09.509 09:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:32:09.509 Malloc0 00:32:09.770 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:09.770 Delay0 00:32:09.770 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:10.031 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:32:10.291 NULL1 00:32:10.291 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
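With cnode1 serving Delay0 and the 1000-block NULL1, the stress phase that follows below keeps spdk_nvme_perf reading at queue depth 128 for 30 seconds while a loop repeatedly yanks namespace 1 out of the subsystem, re-adds Delay0, and grows NULL1 one block at a time (1001, 1002, ...). The floods of "Read completed with error (sct=0, sc=11)" are the initiator observing each hotplug, not a failure. The loop's shape, reconstructed from the repeating RPC records (perf flags copied from the invocation below):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$SPDK/scripts/rpc.py

"$SPDK/build/bin/spdk_nvme_perf" -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
perf_pid=$!

null_size=1000
while kill -0 "$perf_pid" 2>/dev/null; do    # loop for as long as perf is running
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove nsid 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
    $rpc bdev_null_resize NULL1 $(( ++null_size ))                # 1001, 1002, ...
done
wait "$perf_pid"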
00:32:10.552 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3564740
00:32:10.552 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:32:10.552 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:10.552 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:11.497 Read completed with error (sct=0, sc=11)
00:32:11.497 09:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:11.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:11.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:11.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:11.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:11.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:11.757 09:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:32:11.757 09:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:32:12.019 true
00:32:12.019 09:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:12.019 09:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:12.963 09:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:12.963 09:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:32:12.963 09:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:32:13.224 true
00:32:13.224 09:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:13.224 09:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:13.485 09:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:13.485 09:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:32:13.485 09:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:32:13.746 true
00:32:13.746 09:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:13.746 09:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:14.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:14.688 09:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:14.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:14.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:14.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:14.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:14.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:14.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:14.948 09:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:32:14.948 09:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:32:15.208 true
00:32:15.208 09:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:15.208 09:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:16.148 09:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:16.148 09:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:32:16.148 09:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:32:16.407 true
00:32:16.407 09:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:16.407 09:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:16.667 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:16.667 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:32:16.667 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:32:16.927 true
00:32:16.927 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:16.927 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:17.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:17.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:17.186 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:17.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:17.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:17.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:17.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:17.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:17.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:17.186 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:32:17.186 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:32:17.446 true
00:32:17.446 09:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:17.446 09:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:18.386 09:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:18.386 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:32:18.386 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:32:18.646 true
00:32:18.646 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:18.646 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:18.904 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:18.904 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:32:18.904 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:32:19.164 true
00:32:19.164 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:19.164 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:20.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:20.546 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:20.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:20.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:20.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:20.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:20.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:20.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:20.546 09:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:32:20.546 09:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:32:20.546 true
00:32:20.805 09:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:20.805 09:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:21.744 09:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:21.744 09:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:32:21.744 09:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:32:21.744 true
00:32:22.003 09:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:22.003 09:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:22.003 09:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:22.263 09:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:32:22.263 09:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:32:22.523 true
00:32:22.523 09:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:22.523 09:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:23.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:23.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:23.724 09:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:23.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:23.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:23.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:23.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:23.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:23.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:23.724 09:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:32:23.724 09:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:32:23.985 true
00:32:23.985 09:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:23.985 09:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:24.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:24.929 09:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:24.929 09:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:32:24.929 09:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:32:25.189 true
00:32:25.189 09:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:25.189 09:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:25.449 09:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:25.449 09:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:32:25.449 09:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:32:25.710 true
00:32:25.710 09:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:25.710 09:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:25.971 09:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:26.231 09:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:32:26.231 09:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:32:26.231 true
00:32:26.231 09:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:26.231 09:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:26.491 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:26.752 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:32:26.752 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:32:26.752 true
00:32:26.752 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:26.752 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:27.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:27.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
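The @44-@50 markers repeating above are the hot-plug loop of test/nvmf/target/ns_hotplug_stress.sh. A sketch of what the trace implies (the while/do wrapper and the increment expression are inferred from the markers; xtrace only prints the expanded commands):

    # Loop for as long as the spdk_nvme_perf job is still running:
    while kill -0 "$PERF_PID"; do                                        # @44
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45: hot-remove NSID 1 under load
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46: hot-add it back
        null_size=$((null_size + 1))                                     # @49: 1000 -> 1001 -> 1002 ...
        rpc.py bdev_null_resize NULL1 "$null_size"                       # @50: grow NULL1 while I/O runs
    done

Each iteration forces the initiator to observe a namespace disappearing and reappearing (the "Read completed with error (sct=0, sc=11)" bursts) while NSID 2 is concurrently resized, which matches the null_size values 1001..1032 stepping through the trace.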
00:32:27.012 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:27.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:27.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:27.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:27.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:27.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:27.277 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:32:27.277 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:32:27.277 true
00:32:27.277 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:27.277 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:28.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:28.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:28.304 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:28.304 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:32:28.304 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:32:28.565 true
00:32:28.565 09:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:28.565 09:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:28.829 09:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:28.829 09:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:32:28.829 09:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:32:29.100 true
00:32:29.100 09:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:29.100 09:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:30.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:30.483 09:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:30.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:30.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:30.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:30.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:30.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:30.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:30.483 09:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:32:30.483 09:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:32:30.744 true
00:32:30.744 09:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:30.744 09:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:31.687 09:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:31.687 09:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:32:31.687 09:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:32:31.948 true
00:32:31.948 09:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:31.948 09:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:31.948 09:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:32.209 09:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:32:32.209 09:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:32:32.469 true
00:32:32.469 09:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:32.469 09:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:33.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:33.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:33.411 09:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:33.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:33.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:33.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:33.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:33.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:33.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:33.671 09:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:32:33.671 09:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:32:33.932 true
00:32:33.932 09:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:33.932 09:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:34.874 09:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:34.874 09:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:32:34.874 09:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:32:35.134 true
00:32:35.134 09:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:35.135 09:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:35.395 09:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:35.395 09:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:32:35.395 09:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:32:35.656 true
00:32:35.656 09:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:35.656 09:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:37.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:37.041 09:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:37.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:37.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:37.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:37.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:37.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:37.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:37.041 09:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:32:37.041 09:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:32:37.041 true
00:32:37.041 09:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:37.041 09:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:37.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:37.982 09:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:37.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:38.244 09:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:32:38.244 09:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:32:38.244 true
00:32:38.244 09:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:38.244 09:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:38.505 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:38.765 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:32:38.765 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:32:38.765 true
00:32:38.765 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:38.765 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:39.025 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:39.286 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:32:39.286 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:32:39.286 true
00:32:39.547 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:39.547 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:39.547 09:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:39.809 09:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:32:39.809 09:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:32:40.069 true
00:32:40.069 09:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:40.069 09:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:41.014 Initializing NVMe Controllers
00:32:41.014 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:41.014 Controller IO queue size 128, less than required.
00:32:41.014 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:41.014 Controller IO queue size 128, less than required.
00:32:41.014 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:41.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:41.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:32:41.014 Initialization complete. Launching workers.
00:32:41.014 ========================================================
00:32:41.014                                                                                                                Latency(us)
00:32:41.014 Device Information                                                       : IOPS       MiB/s    Average        min        max
00:32:41.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0 :    2383.07       1.16    34765.55    1705.63 1039702.56
00:32:41.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0 :   18435.53       9.00     6942.84    1236.44  432385.68
00:32:41.014 ========================================================
00:32:41.014 Total                                                                    :   20818.60      10.17    10127.65    1236.44 1039702.56
00:32:41.014
00:32:41.014 09:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:41.277 09:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:32:41.277 09:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:32:41.538 true
00:32:41.538 09:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3564740
00:32:41.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3564740) - No such process
00:32:41.538 09:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3564740
00:32:41.538 09:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:41.538 09:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:41.800 09:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:32:41.800 09:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:32:41.800 09:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:32:41.800 09:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:41.800 09:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:32:42.062 null0
00:32:42.062 09:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:42.062 09:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:42.062 09:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:32:42.062 null1
00:32:42.062 09:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:42.062 09:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:42.062 09:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:32:42.323 null2
00:32:42.323 09:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:42.323 09:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:42.323 09:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:32:42.584 null3
00:32:42.584 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:42.584 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:42.584 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:32:42.584 null4
00:32:42.584 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:42.584 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:42.584 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:32:42.845 null5
00:32:42.845 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:42.845 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:42.845 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:32:43.106 null6
00:32:43.106 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:43.106 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:43.106 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:32:43.106 null7
00:32:43.106 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:43.106 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:43.106 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:32:43.106 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:43.106 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:32:43.106 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:32:43.106 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:43.106 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:43.106 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.106 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
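The interleaved @58-@66 and @14-@18 markers above are eight concurrent add_remove workers, one per namespace, hammering the subsystem in parallel. A sketch of what the trace implies (the function body and loop wrappers are reconstructed from the markers; the exact script text may differ):

    add_remove() {                                           # traced as @14-@18
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }

    nthreads=8; pids=()                                      # @58
    for ((i = 0; i < nthreads; i++)); do                     # @59
        rpc.py bdev_null_create "null$i" 100 4096            # @60: eight small null bdevs, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do                     # @62
        add_remove $((i + 1)) "null$i" &                     # @63: NSID i+1 backed by null$i
        pids+=($!)                                           # @64
    done
    wait "${pids[@]}"                                        # @66: the eight worker PIDs shown below

Because the eight workers run unsynchronized in background subshells, their xtrace output interleaves arbitrarily in the log that follows; the out-of-order NSID add/remove lines are expected, not a defect.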
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3571159 3571161 3571163 3571166 3571169 3571171 3571174 3571176
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.107 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:32:43.369 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:43.369 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:43.369 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:43.369 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:43.369 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:43.369 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:32:43.369 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:43.369 09:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:43.630 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.631 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:43.631 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:43.631 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.631 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:32:43.631 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:43.631 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:43.892 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:32:44.153 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:44.153 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:44.153 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:44.153 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:44.153 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:44.153 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:44.153 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:44.153 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:32:44.153 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:44.153 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:44.153 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:44.413 09:53:43
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.413 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:44.413 09:53:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:44.413 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:44.413 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:44.413 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:44.413 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:44.674 09:53:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:44.674 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:44.675 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.675 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.675 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:44.675 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.675 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.675 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:44.936 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:44.936 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:44.936 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:32:44.936 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.936 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.936 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:44.936 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:44.936 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:44.936 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:44.936 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:44.936 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.936 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.936 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:45.196 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.196 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.196 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:45.196 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.196 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.197 09:53:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:45.197 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:45.458 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:45.458 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.458 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.458 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:45.458 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.458 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.458 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:45.458 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.458 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.458 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:45.458 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.458 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.458 09:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:45.458 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.458 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.458 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:45.458 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.458 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.458 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:45.458 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.458 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.458 09:53:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:45.458 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:45.458 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:45.458 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.458 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.458 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:45.719 09:53:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.719 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:45.980 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:45.980 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:45.980 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.980 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.980 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:45.980 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.980 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.980 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:45.980 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:45.980 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:45.980 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:45.980 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.980 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:45.980 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.980 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:45.980 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.980 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.981 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:45.981 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:45.981 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:46.241 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:46.501 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:46.501 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:46.501 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:46.501 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:46.501 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.501 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.501 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:46.501 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:46.501 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.501 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.501 09:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.501 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:46.764 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:46.764 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:46.764 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.764 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.764 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:46.764 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:46.764 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:46.764 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:46.764 09:53:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:46.764 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.764 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:47.025 rmmod nvme_tcp 00:32:47.025 rmmod nvme_fabrics 00:32:47.025 rmmod nvme_keyring 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:32:47.025 
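The trap being cleared above marks the end of the stress phase. The pages of add/remove churn before it come from target/ns_hotplug_stress.sh, whose @16/@17/@18 xtrace tags correspond to a tight loop that attaches a null bdev to nqn.2016-06.io.spdk:cnode1 as a namespace and detaches it again over rpc.py. A minimal sketch of that loop as the trace implies it — the RPC invocations, the bound of 10, and the nsid/null0..null7 pairing are read straight off the log, while the worker-function name and the backgrounding of eight workers are assumptions:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  hotplug_one_ns() {   # hypothetical name for the per-namespace worker
    local nsid=$1 bdev=$2 i
    for (( i = 0; i < 10; ++i )); do                            # @16 in the trace
      "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"    # @17
      "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"            # @18
    done
  }

  # Eight workers run concurrently, one per null bdev, which is why the
  # (( ++i )) / (( i < 10 )) pairs interleave out of order in the log above.
  for n in {1..8}; do
    hotplug_one_ns "$n" "null$((n - 1))" &
  done
  wait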
09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 3564362 ']' 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 3564362 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' -z 3564362 ']' 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # kill -0 3564362 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # uname 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3564362 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3564362' 00:32:47.025 killing process with pid 3564362 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # kill 3564362 00:32:47.025 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@977 -- # wait 3564362 00:32:47.286 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:47.286 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:47.286 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:47.286 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:32:47.286 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:32:47.286 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:47.286 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:32:47.286 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:47.286 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:47.286 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.286 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.286 09:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.196 09:53:48 
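From the @68 trap line onward the trace is pure teardown: nvmftestfini unloads the initiator-side kernel modules (the bare rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are modprobe -v output), killprocess stops the target, pid 3564362 running as reactor_1, and the SPDK-tagged iptables rules and leftover interface address are removed. A condensed sketch of that path under the helper names the trace shows — the bodies are simplified reconstructions, not copies of nvmf/common.sh:

  nvmfcleanup() {               # nvmf/common.sh@514, tcp flavor
    sync
    modprobe -v -r nvme-tcp     # prints the rmmod lines captured above
    modprobe -v -r nvme-fabrics
  }

  killprocess() {               # common/autotest_common.sh@953-@977
    local pid=$1
    kill -0 "$pid" || return 0                   # still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")      # 'reactor_1' here
    [ "$name" = sudo ] && return 1               # never kill a sudo wrapper (@963)
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                   # @972 / @977
  }

  # @789: restore iptables minus the SPDK_NVMF-tagged rules, then flush the
  # address left on the test interface.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1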
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:49.196
00:32:49.196 real 0m49.667s
00:32:49.196 user 2m56.552s
00:32:49.196 sys 0m21.387s
00:32:49.196 09:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # xtrace_disable
00:32:49.196 09:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:32:49.196 ************************************
00:32:49.196 END TEST nvmf_ns_hotplug_stress
00:32:49.196 ************************************
00:32:49.457 09:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:32:49.457 09:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']'
00:32:49.457 09:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1110 -- # xtrace_disable
00:32:49.457 09:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:49.457 ************************************
00:32:49.457 START TEST nvmf_delete_subsystem
00:32:49.457 ************************************
00:32:49.457 09:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:32:49.457 * Looking for test storage...
00:32:49.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:32:49.457 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1625 -- # [[ y == y ]]
00:32:49.457 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1626 -- # lcov --version
00:32:49.457 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1626 -- # awk '{print $NF}'
00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1626 -- # lt 1.15 2
00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem
-- scripts/common.sh@341 -- # ver2_l=1 00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:49.718 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:32:49.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.719 --rc genhtml_branch_coverage=1 00:32:49.719 --rc genhtml_function_coverage=1 00:32:49.719 --rc genhtml_legend=1 00:32:49.719 --rc geninfo_all_blocks=1 00:32:49.719 --rc geninfo_unexecuted_blocks=1 00:32:49.719 00:32:49.719 ' 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:32:49.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.719 --rc genhtml_branch_coverage=1 00:32:49.719 --rc genhtml_function_coverage=1 00:32:49.719 --rc genhtml_legend=1 00:32:49.719 --rc geninfo_all_blocks=1 00:32:49.719 --rc geninfo_unexecuted_blocks=1 00:32:49.719 00:32:49.719 ' 00:32:49.719 09:53:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:32:49.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.719 --rc genhtml_branch_coverage=1 00:32:49.719 --rc genhtml_function_coverage=1 00:32:49.719 --rc genhtml_legend=1 00:32:49.719 --rc geninfo_all_blocks=1 00:32:49.719 --rc geninfo_unexecuted_blocks=1 00:32:49.719 00:32:49.719 ' 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:32:49.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.719 --rc genhtml_branch_coverage=1 00:32:49.719 --rc genhtml_function_coverage=1 00:32:49.719 --rc genhtml_legend=1 00:32:49.719 --rc geninfo_all_blocks=1 00:32:49.719 --rc geninfo_unexecuted_blocks=1 00:32:49.719 00:32:49.719 ' 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...same three toolchain dirs repeated...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[...same three toolchain dirs repeated...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...same three toolchain dirs repeated...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...same three toolchain dirs repeated...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
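At this point common.sh has finished assembling the target's command line (note the --interrupt-mode flag appended at nvmf/common.sh@34) and delete_subsystem.sh hands control to nvmftestinit. On this phy rig that means carving one e810 port into a private network namespace so target and initiator can talk over real hardware on a single host. A condensed sketch of what the function does, distilled from the trace further below (the cvl_* interface names and 10.0.0.x addresses come from this log, not invented):

    # Target side lives in its own namespace; initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # reachability check, both ways
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1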
00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.719 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.720 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:49.720 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:49.720 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:32:49.720 09:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # 
local -ga mlx 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:57.864 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:57.864 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:57.864 Found net devices under 0000:31:00.0: cvl_0_0 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.864 09:53:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:57.864 Found net devices under 0000:31:00.1: cvl_0_1 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:57.864 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:57.865 09:53:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:57.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:57.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:32:57.865 00:32:57.865 --- 10.0.0.2 ping statistics --- 00:32:57.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.865 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:57.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:57.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:32:57.865 00:32:57.865 --- 10.0.0.1 ping statistics --- 00:32:57.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.865 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@727 -- # xtrace_disable 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=3576276 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 3576276 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # '[' -z 3576276 ']' 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local max_retries=100 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
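nvmfappstart has just launched the target inside the namespace (the full command line appears above: nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3, pid 3576276) and now blocks until the app answers on its RPC socket. A rough stand-alone equivalent follows; polling rpc_get_methods is an assumption about how the harness's waitforlisten detects readiness, not a copy of it:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app is ready; the socket lives on
    # the shared filesystem, so the client side needs no netns exec.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done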
00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@843 -- # xtrace_disable 00:32:57.865 09:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:57.865 [2024-10-07 09:53:56.941533] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:57.865 [2024-10-07 09:53:56.942701] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:32:57.865 [2024-10-07 09:53:56.942755] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:57.865 [2024-10-07 09:53:57.035108] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:57.865 [2024-10-07 09:53:57.129860] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:57.865 [2024-10-07 09:53:57.129926] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:57.865 [2024-10-07 09:53:57.129934] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:57.865 [2024-10-07 09:53:57.129942] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:57.865 [2024-10-07 09:53:57.129948] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:57.865 [2024-10-07 09:53:57.131175] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.865 [2024-10-07 09:53:57.131177] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.865 [2024-10-07 09:53:57.207303] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:57.865 [2024-10-07 09:53:57.207896] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:57.865 [2024-10-07 09:53:57.208216] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
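The NOTICE lines above confirm the interrupt-mode plumbing worked end to end: both reactors started (cores 0 and 1, matching -m 0x3), and the app thread plus both nvmf poll-group threads were switched to interrupt mode, meaning idle reactors sleep on a file descriptor instead of busy-polling. One way to inspect that state on a live target, sketched here (framework_get_reactors is a standard SPDK RPC; the jq filter is only illustrative):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Show each reactor's core and the lightweight threads scheduled on it.
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_get_reactors \
        | jq '.reactors[] | {lcore, threads: [.lw_threads[].name]}'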
00:32:58.127 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:32:58.127 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@867 -- # return 0 00:32:58.127 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:58.127 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@733 -- # xtrace_disable 00:32:58.127 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:58.388 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:58.388 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:58.388 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:32:58.388 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:58.388 [2024-10-07 09:53:57.820230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:58.388 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:32:58.388 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:58.388 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:58.389 [2024-10-07 09:53:57.864677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:58.389 NULL1 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:32:58.389 09:53:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:58.389 Delay0 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3576418 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:32:58.389 09:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:58.389 [2024-10-07 09:53:57.975644] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
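Everything the fault injection needs is now staged: a TCP transport, subsystem cnode1 listening on 10.0.0.2:4420, and namespace 1 backed by a delay bdev that injects one-second latencies so I/O piles up in flight. Condensed from the rpc_cmd trace above into plain shell (rpc below is an illustrative stand-in for the harness's rpc_cmd wrapper around scripts/rpc.py):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" "$@"; }

    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_null_create NULL1 1000 512                 # 1000 MB null bdev, 512-byte blocks
    rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000     # 1 s avg/p99 read and write latency
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Drive mixed random I/O at queue depth 128 for 5 s from the host side,
    # then yank the subsystem out from under it two seconds in:
    "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

Deleting the subsystem against a 1 s delay bdev guarantees a deep backlog of in-flight commands for the teardown path to abort, which is exactly the storm recorded next.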
00:33:00.305 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:00.305 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable
00:33:00.305 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:00.567 Read completed with error (sct=0, sc=8)
00:33:00.567 Write completed with error (sct=0, sc=8)
00:33:00.567 starting I/O failed: -6
00:33:00.567 [... "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" repeated many dozens of times while the queues drain ...]
00:33:00.568 [2024-10-07 09:54:00.139537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2422fd0 is same with the state(6) to be set
00:33:00.568 [... further bursts of the same "completed with error (sct=0, sc=8)" completions ...]
00:33:00.568 [2024-10-07 09:54:00.144399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd52000d450 is same with the state(6) to be set
00:33:00.568 [... more of the same completions ...]
00:33:01.513 [2024-10-07 09:54:01.115796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24246b0 is same with the state(6) to be set
00:33:01.513 [... more of the same completions ...]
00:33:01.513 [2024-10-07 09:54:01.143461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24231b0 is same with the state(6) to be set
00:33:01.513 [... more of the same completions ...]
00:33:01.513 [2024-10-07 09:54:01.144019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24236c0 is same with the state(6) to be set
00:33:01.513 [... final burst of the same completions ...]
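The burst condensed above is the point of the test rather than a failure: deleting cnode1 while perf still holds 128 commands in flight against the 1 s delay bdev forces every queued command back with sct=0, sc=8. Reading that as NVMe status is an interpretation, not something the log states: status code type 0 is generic command status, and generic status 0x08 is Command Aborted due to SQ Deletion, which matches a subsystem being torn down under load; the initiator-side "starting I/O failed: -6" is plausibly -ENXIO from submissions attempted after the qpair dropped. To tally the aborted completions from a saved copy of this console output (file name illustrative):

    grep -o 'completed with error (sct=0, sc=8)' console.log | wc -l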
00:33:01.513 [2024-10-07 09:54:01.146735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd52000cfe0 is same with the state(6) to be set 00:33:01.513 Write completed with error (sct=0, sc=8) 00:33:01.513 Read completed with error (sct=0, sc=8) 00:33:01.513 Write completed with error (sct=0, sc=8) 00:33:01.513 Read completed with error (sct=0, sc=8) 00:33:01.513 Read completed with error (sct=0, sc=8) 00:33:01.513 Read completed with error (sct=0, sc=8) 00:33:01.513 Read completed with error (sct=0, sc=8) 00:33:01.513 Read completed with error (sct=0, sc=8) 00:33:01.513 Read completed with error (sct=0, sc=8) 00:33:01.513 Write completed with error (sct=0, sc=8) 00:33:01.513 Read completed with error (sct=0, sc=8) 00:33:01.513 Write completed with error (sct=0, sc=8) 00:33:01.513 Write completed with error (sct=0, sc=8) 00:33:01.513 Read completed with error (sct=0, sc=8) 00:33:01.513 Read completed with error (sct=0, sc=8) 00:33:01.513 Write completed with error (sct=0, sc=8) 00:33:01.513 Read completed with error (sct=0, sc=8) 00:33:01.513 Read completed with error (sct=0, sc=8) 00:33:01.513 [2024-10-07 09:54:01.146807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd52000d780 is same with the state(6) to be set 00:33:01.513 Initializing NVMe Controllers 00:33:01.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:01.513 Controller IO queue size 128, less than required. 00:33:01.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:01.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:01.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:01.513 Initialization complete. Launching workers. 
00:33:01.513 ========================================================
00:33:01.514 Latency(us)
00:33:01.514 Device Information : IOPS MiB/s Average min max
00:33:01.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 178.23 0.09 917213.93 480.68 1043980.69
00:33:01.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.31 0.08 923603.33 334.17 1011692.45
00:33:01.514 ========================================================
00:33:01.514 Total : 336.54 0.16 920219.59 334.17 1043980.69
00:33:01.514
00:33:01.514 [2024-10-07 09:54:01.147351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24246b0 (9): Bad file descriptor
00:33:01.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:33:01.514 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:33:01.514 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:33:01.514 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3576418
00:33:01.514 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3576418
00:33:02.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3576418) - No such process
00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3576418
00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # local es=0
00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # valid_exec_arg wait 3576418
00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # local arg=wait
00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in
00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@645 -- # type -t wait
00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in
00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@656 -- # wait 3576418
00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@656 -- # es=1
00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@664 -- # (( es > 128 ))
00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # [[ -n '' ]]
00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@680 -- # (( !es == 0 ))
00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.086 [2024-10-07 09:54:01.680656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@564 -- # xtrace_disable 00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3577187 00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3577187 00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:02.086 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:02.346 [2024-10-07 09:54:01.766758] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
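The trace above is the second half of delete_subsystem.sh: a fresh subsystem (cnode1, serial SPDK00000000000001, at most 10 namespaces) gets a TCP listener and the Delay0 namespace, spdk_nvme_perf is launched against it, and the script then polls the perf PID while the subsystem is torn down underneath it. Condensed into plain shell, the flow looks roughly like the sketch below; rpc.py stands in for the suite's rpc_cmd wrapper, and the explicit nvmf_delete_subsystem call is the step implied between this trace and the perf errors above, not shown verbatim in it.

# Minimal sketch of the delete-while-under-I/O flow (paths relative to an
# SPDK checkout; assumes a running nvmf_tgt on the default RPC socket).
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &    # 3 s of queued randrw I/O
perf_pid=$!
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem mid-I/O
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do        # kill -0 probes the PID without signaling it
    (( delay++ > 20 )) && { echo "perf did not exit"; exit 1; }
    sleep 0.5                                    # same 0.5 s cadence as the trace
done
wait "$perf_pid"                                 # reap; a nonzero status is expected here

The "No such process" messages in the trace are this same pattern working as intended: once the perf process has exited, kill -0 fails, the loop ends, and wait collects the (expectedly nonzero) exit status.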
00:33:02.607 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:02.607 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3577187 00:33:02.607 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:03.240 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:03.240 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3577187 00:33:03.240 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:03.853 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:03.853 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3577187 00:33:03.853 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:04.114 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:04.114 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3577187 00:33:04.114 09:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:04.686 09:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:04.686 09:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3577187 00:33:04.686 09:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:05.260 09:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:05.260 09:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3577187 00:33:05.260 09:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:05.522 Initializing NVMe Controllers 00:33:05.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:05.522 Controller IO queue size 128, less than required. 00:33:05.522 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:05.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:05.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:05.522 Initialization complete. Launching workers. 
00:33:05.522 ========================================================
00:33:05.522 Latency(us)
00:33:05.522 Device Information : IOPS MiB/s Average min max
00:33:05.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002814.84 1000184.51 1041269.37
00:33:05.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004197.02 1000259.57 1011604.69
00:33:05.522 ========================================================
00:33:05.522 Total : 256.00 0.12 1003505.93 1000184.51 1041269.37
00:33:05.522
00:33:05.522 [2024-10-07 09:54:04.931698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb74e30 is same with the state(6) to be set
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3577187
00:33:05.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3577187) - No such process
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3577187
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:05.783 rmmod nvme_tcp
00:33:05.783 rmmod nvme_fabrics
00:33:05.783 rmmod nvme_keyring
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 3576276 ']'
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 3576276
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' -z 3576276 ']'
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # kill -0 3576276
00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # uname
00:33:05.783 09:54:05
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3576276 00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3576276' 00:33:05.783 killing process with pid 3576276 00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # kill 3576276 00:33:05.783 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@977 -- # wait 3576276 00:33:06.141 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:06.141 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:06.141 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:06.141 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:33:06.141 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:33:06.141 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:06.141 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:33:06.141 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:06.141 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:06.141 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.141 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:06.141 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.058 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:08.058 00:33:08.058 real 0m18.638s 00:33:08.058 user 0m26.544s 00:33:08.058 sys 0m7.800s 00:33:08.058 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # xtrace_disable 00:33:08.058 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:08.058 ************************************ 00:33:08.058 END TEST nvmf_delete_subsystem 00:33:08.058 ************************************ 00:33:08.058 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh 
--transport=tcp --interrupt-mode 00:33:08.058 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:33:08.058 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1110 -- # xtrace_disable 00:33:08.058 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:08.058 ************************************ 00:33:08.058 START TEST nvmf_host_management 00:33:08.058 ************************************ 00:33:08.058 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:08.320 * Looking for test storage... 00:33:08.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1626 -- # lcov --version 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:33:08.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.320 --rc genhtml_branch_coverage=1 00:33:08.320 --rc genhtml_function_coverage=1 00:33:08.320 --rc genhtml_legend=1 00:33:08.320 --rc geninfo_all_blocks=1 00:33:08.320 --rc geninfo_unexecuted_blocks=1 00:33:08.320 00:33:08.320 ' 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:33:08.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.320 --rc genhtml_branch_coverage=1 00:33:08.320 --rc genhtml_function_coverage=1 00:33:08.320 --rc genhtml_legend=1 00:33:08.320 --rc geninfo_all_blocks=1 00:33:08.320 --rc geninfo_unexecuted_blocks=1 00:33:08.320 00:33:08.320 ' 00:33:08.320 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:33:08.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.320 --rc genhtml_branch_coverage=1 00:33:08.320 --rc genhtml_function_coverage=1 00:33:08.320 --rc genhtml_legend=1 00:33:08.320 --rc geninfo_all_blocks=1 00:33:08.320 --rc geninfo_unexecuted_blocks=1 00:33:08.320 00:33:08.320 ' 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:33:08.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.321 --rc genhtml_branch_coverage=1 00:33:08.321 --rc genhtml_function_coverage=1 00:33:08.321 --rc genhtml_legend=1 
00:33:08.321 --rc geninfo_all_blocks=1 00:33:08.321 --rc geninfo_unexecuted_blocks=1 00:33:08.321 00:33:08.321 ' 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=[... ~12 duplicated copies of /opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin omitted ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=[... same value with the duplicated Go/protoc/golangci prefix, omitted ...]
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=[... same value with the duplicated Go/protoc/golangci prefix, omitted ...]
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo [... same PATH value, omitted ...]
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns
00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:08.321 09:54:07
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:33:08.321 09:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:16.467 09:54:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:16.467 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:16.467 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:16.467 09:54:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:16.467 Found net devices under 0000:31:00.0: cvl_0_0 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:16.467 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:16.468 Found net devices under 0000:31:00.1: cvl_0_1 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:16.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:16.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:33:16.468 00:33:16.468 --- 10.0.0.2 ping statistics --- 00:33:16.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.468 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:16.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:16.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:33:16.468 00:33:16.468 --- 10.0.0.1 ping statistics --- 00:33:16.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.468 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=3582718 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- 
# waitforlisten 3582718 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@834 -- # '[' -z 3582718 ']' 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local max_retries=100 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@843 -- # xtrace_disable 00:33:16.468 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:16.468 [2024-10-07 09:54:15.773956] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:16.468 [2024-10-07 09:54:15.775087] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:33:16.468 [2024-10-07 09:54:15.775138] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.468 [2024-10-07 09:54:15.865778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:16.468 [2024-10-07 09:54:15.959567] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:16.468 [2024-10-07 09:54:15.959631] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.468 [2024-10-07 09:54:15.959640] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.468 [2024-10-07 09:54:15.959647] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.468 [2024-10-07 09:54:15.959654] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:16.468 [2024-10-07 09:54:15.961697] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:33:16.468 [2024-10-07 09:54:15.961937] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:33:16.468 [2024-10-07 09:54:15.961938] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.469 [2024-10-07 09:54:15.961775] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:33:16.469 [2024-10-07 09:54:16.047900] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:16.469 [2024-10-07 09:54:16.048859] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:16.469 [2024-10-07 09:54:16.049112] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
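At this point everything the target needs lives inside the cvl_0_0_ns_spdk namespace that nvmftestinit plumbed earlier (10.0.0.2 on cvl_0_0 inside the namespace, 10.0.0.1 on cvl_0_1 outside, plus an iptables ACCEPT for port 4420); the remaining poll groups are switched to interrupt mode in the entries that follow. Reduced to its essentials, the bring-up that produced this trace is roughly the sketch below. This is a hedged stand-in, not the suite's waitforlisten helper: framework_wait_init is the readiness RPC the suite itself uses later for bdevperf, and the rpc.py path and default socket are assumptions.

# Hypothetical condensed target bring-up; assumes the netns and NIC
# setup traced earlier already exist.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &   # cores 1-4, all tracepoint groups
nvmfpid=$!
./scripts/rpc.py framework_wait_init                  # blocks until SPDK finishes subsystem init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # the next step in the trace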
00:33:16.469 [2024-10-07 09:54:16.049497] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:16.469 [2024-10-07 09:54:16.049564] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:17.044 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:33:17.044 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@867 -- # return 0 00:33:17.044 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:17.044 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@733 -- # xtrace_disable 00:33:17.044 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:17.044 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:17.044 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:17.044 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@564 -- # xtrace_disable 00:33:17.044 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:17.044 [2024-10-07 09:54:16.635060] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.044 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:33:17.044 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:33:17.044 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:17.044 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:17.044 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:17.044 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:33:17.044 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:33:17.044 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@564 -- # xtrace_disable 00:33:17.044 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:17.044 Malloc0 00:33:17.306 [2024-10-07 09:54:16.727301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@733 -- # xtrace_disable 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3582935 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3582935 /var/tmp/bdevperf.sock 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@834 -- # '[' -z 3582935 ']' 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local max_retries=100 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:17.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@843 -- # xtrace_disable 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:17.306 { 00:33:17.306 "params": { 00:33:17.306 "name": "Nvme$subsystem", 00:33:17.306 "trtype": "$TEST_TRANSPORT", 00:33:17.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:17.306 "adrfam": "ipv4", 00:33:17.306 "trsvcid": "$NVMF_PORT", 00:33:17.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:17.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:17.306 "hdgst": ${hdgst:-false}, 00:33:17.306 "ddgst": ${ddgst:-false} 00:33:17.306 }, 00:33:17.306 "method": "bdev_nvme_attach_controller" 00:33:17.306 } 00:33:17.306 EOF 00:33:17.306 )") 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
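For context on the config+=("$(cat <<-EOF ...)") trace above: gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per requested subsystem, validates the assembled JSON with jq, and bdevperf reads the result from a file descriptor (--json /dev/fd/63 here). A reduced sketch of the heredoc-plus-jq pattern, hard-coding the values this run substitutes from its environment (TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420); the real helper wraps such stanzas into a complete target JSON config rather than emitting one stanza alone:

gen_attach_stanza() {
    local n=$1
    # The heredoc keeps the JSON readable; 'jq .' validates and pretty-prints it.
    cat <<EOF | jq .
{
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": false,
    "ddgst": false
  }
}
EOF
}
gen_attach_stanza 0

Feeding the generated config to bdevperf through process substitution, as this run does with /dev/fd/63, avoids writing a temporary config file.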
00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:33:17.306 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:17.306 "params": { 00:33:17.306 "name": "Nvme0", 00:33:17.306 "trtype": "tcp", 00:33:17.306 "traddr": "10.0.0.2", 00:33:17.306 "adrfam": "ipv4", 00:33:17.306 "trsvcid": "4420", 00:33:17.306 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:17.306 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:17.306 "hdgst": false, 00:33:17.306 "ddgst": false 00:33:17.306 }, 00:33:17.306 "method": "bdev_nvme_attach_controller" 00:33:17.306 }' 00:33:17.306 [2024-10-07 09:54:16.837841] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:33:17.306 [2024-10-07 09:54:16.837914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582935 ] 00:33:17.306 [2024-10-07 09:54:16.922379] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.568 [2024-10-07 09:54:17.019891] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.568 Running I/O for 10 seconds... 00:33:18.143 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@867 -- # return 0 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@564 -- # xtrace_disable 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@564 -- # xtrace_disable 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=728 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 728 -ge 100 ']' 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@564 -- # xtrace_disable 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:18.144 [2024-10-07 09:54:17.746718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174db40 is same with the state(6) to be set 00:33:18.144 [2024-10-07 09:54:17.746784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174db40 is same with the state(6) to be set 00:33:18.144 [2024-10-07 09:54:17.746795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174db40 is same with the state(6) to be set 00:33:18.144 [2024-10-07 09:54:17.746803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174db40 is same with the state(6) to be set 00:33:18.144 [2024-10-07 09:54:17.746811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174db40 is same with the state(6) to be set 00:33:18.144 [2024-10-07 09:54:17.750501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.144 [2024-10-07 09:54:17.750560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.750572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.144 [2024-10-07 09:54:17.750580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.750590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.144 [2024-10-07 09:54:17.750598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.750606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.144 [2024-10-07 09:54:17.750633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.750641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5d2a0 is same with the state(6) to be set 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@564 -- # xtrace_disable 00:33:18.144 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:18.144 [2024-10-07 09:54:17.757244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.144 [2024-10-07 09:54:17.757281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.757301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.144 [2024-10-07 09:54:17.757310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.757320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.144 [2024-10-07 09:54:17.757329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.757339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.144 [2024-10-07 09:54:17.757347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.757357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.144 [2024-10-07 09:54:17.757365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.757375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.144 [2024-10-07 09:54:17.757383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.757393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.144 [2024-10-07 09:54:17.757401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.757412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.144 [2024-10-07 09:54:17.757420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.757430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.144 [2024-10-07 09:54:17.757438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.757449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.144 [2024-10-07 09:54:17.757463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.757473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.144 [2024-10-07 09:54:17.757481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.757491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.144 [2024-10-07 09:54:17.757499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.757508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.144 [2024-10-07 09:54:17.757516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.757525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.144 [2024-10-07 09:54:17.757533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.757542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.144 [2024-10-07 09:54:17.757551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.757561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.144 [2024-10-07 09:54:17.757570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.757579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.144 [2024-10-07 09:54:17.757588] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.757598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.144 [2024-10-07 09:54:17.757605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.144 [2024-10-07 09:54:17.757624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757972] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.757983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.757991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.145 [2024-10-07 09:54:17.758340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.145 [2024-10-07 09:54:17.758351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.146 [2024-10-07 09:54:17.758359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.146 [2024-10-07 09:54:17.758369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.146 [2024-10-07 09:54:17.758377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.146 [2024-10-07 09:54:17.758387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.146 [2024-10-07 09:54:17.758397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.146 [2024-10-07 09:54:17.758407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.146 [2024-10-07 09:54:17.758415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.146 [2024-10-07 09:54:17.758424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.146 [2024-10-07 09:54:17.758432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.146 [2024-10-07 09:54:17.758443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.146 [2024-10-07 09:54:17.758451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.146 [2024-10-07 09:54:17.758461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.146 [2024-10-07 09:54:17.758468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.146 [2024-10-07 09:54:17.758554] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1175f60 was disconnected and freed. reset controller. 
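The wall of ABORTED - SQ DELETION completions above is the injected failure, not a bug: the waitforio loop traced earlier polls bdevperf's RPC socket until the job has completed at least 100 reads (read_io_count=728 here), then nvmf_subsystem_remove_host revokes the host's access, the target drops the queue pair, and bdev_nvme aborts all in-flight I/O and schedules a controller reset; nvmf_subsystem_add_host restores access so the reset can succeed. A hedged sketch of that sequence against a running target/bdevperf pair, using the socket paths, threshold, and NQNs visible in this run:

rpc=./scripts/rpc.py
# Let bdevperf make progress first: poll its iostat until >=100 reads or 10 tries.
for ((i = 10; i > 0; i--)); do
    reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
    [[ ${reads:-0} -ge 100 ]] && break
    sleep 1
done
# Revoke and restore the host on the target's own RPC socket (/var/tmp/spdk.sock).
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1    # in-flight I/O completes as ABORTED - SQ DELETION; bdev_nvme starts a reset
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0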
00:33:18.146 [2024-10-07 09:54:17.759775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
task offset: 109184 on job bdev=Nvme0n1 fails
00:33:18.146
00:33:18.146                                                              Latency(us)
00:33:18.146 Device Information : runtime(s)     IOPS    MiB/s   Fail/s    TO/s   Average      min      max
00:33:18.146 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:33:18.146 Job: Nvme0n1 ended in about 0.53 seconds with error
00:33:18.146 Verification LBA range: start 0x0 length 0x400
00:33:18.146 Nvme0n1            :       0.53  1566.49    97.91   120.07    0.00  36966.76  1761.28  37792.43
00:33:18.146 ===================================================================================================================
00:33:18.146 Total              :             1566.49    97.91   120.07    0.00  36966.76  1761.28  37792.43
00:33:18.146 [2024-10-07 09:54:17.761974] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:18.146 [2024-10-07 09:54:17.762011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5d2a0 (9): Bad file descriptor 00:33:18.146 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:33:18.146 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:33:18.408 [2024-10-07 09:54:17.815357] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:19.354 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3582935 00:33:19.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3582935) - No such process 00:33:19.354 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:33:19.354 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:33:19.354 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:33:19.354 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:33:19.354 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:33:19.354 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:33:19.354 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:19.354 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:19.354 { 00:33:19.354 "params": { 00:33:19.354 "name": "Nvme$subsystem", 00:33:19.354 "trtype": "$TEST_TRANSPORT", 00:33:19.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:19.354 "adrfam": "ipv4", 00:33:19.354 "trsvcid": "$NVMF_PORT", 00:33:19.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:19.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:19.354 "hdgst": ${hdgst:-false}, 00:33:19.354 "ddgst": ${ddgst:-false} 00:33:19.354 }, 00:33:19.354 "method": "bdev_nvme_attach_controller"
00:33:19.354 } 00:33:19.354 EOF 00:33:19.354 )") 00:33:19.354 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:33:19.354 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:33:19.354 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:33:19.354 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:19.354 "params": { 00:33:19.354 "name": "Nvme0", 00:33:19.354 "trtype": "tcp", 00:33:19.354 "traddr": "10.0.0.2", 00:33:19.354 "adrfam": "ipv4", 00:33:19.354 "trsvcid": "4420", 00:33:19.354 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:19.354 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:19.354 "hdgst": false, 00:33:19.354 "ddgst": false 00:33:19.354 }, 00:33:19.354 "method": "bdev_nvme_attach_controller" 00:33:19.354 }' 00:33:19.354 [2024-10-07 09:54:18.838226] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:33:19.354 [2024-10-07 09:54:18.838299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3583344 ] 00:33:19.616 [2024-10-07 09:54:18.923067] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.877 [2024-10-07 09:54:19.019326] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.818 Running I/O for 1 seconds...
00:33:20.818 1483.00 IOPS, 92.69 MiB/s
00:33:20.818                                                              Latency(us)
00:33:20.818 Device Information : runtime(s)     IOPS    MiB/s   Fail/s    TO/s   Average      min      max
00:33:20.818 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:33:20.818 Verification LBA range: start 0x0 length 0x400
00:33:20.818 Nvme0n1            :       1.05  1468.32    91.77     0.00    0.00  41243.42  2211.84  44127.57
00:33:20.818 ===================================================================================================================
00:33:20.818 Total              :             1468.32    91.77     0.00    0.00  41243.42  2211.84  44127.57
00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:33:21.080 09:54:20
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:21.080 rmmod nvme_tcp 00:33:21.080 rmmod nvme_fabrics 00:33:21.080 rmmod nvme_keyring 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 3582718 ']' 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 3582718 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' -z 3582718 ']' 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # kill -0 3582718 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # uname 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3582718 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3582718' 00:33:21.080 killing process with pid 3582718 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # kill 3582718 00:33:21.080 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@977 -- # wait 3582718 00:33:21.341 [2024-10-07 09:54:20.808909] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:33:21.342 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:21.342 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:21.342 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:21.342 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:33:21.342 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:33:21.342 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:21.342 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:33:21.342 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:21.342 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:21.342 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.342 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:21.342 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.257 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:23.257 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:33:23.257 00:33:23.257 real 0m15.261s 00:33:23.257 user 0m20.766s 00:33:23.257 sys 0m7.763s 00:33:23.257 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # xtrace_disable 00:33:23.257 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:23.257 ************************************ 00:33:23.257 END TEST nvmf_host_management 00:33:23.257 ************************************ 00:33:23.518 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:23.518 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:33:23.518 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1110 -- # xtrace_disable 00:33:23.518 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:23.518 ************************************ 00:33:23.518 START TEST nvmf_lvol 00:33:23.518 ************************************ 00:33:23.518 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:23.518 * Looking for test storage... 
00:33:23.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:23.518 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:33:23.518 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1626 -- # lcov --version 00:33:23.518 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:33:23.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.780 --rc genhtml_branch_coverage=1 00:33:23.780 --rc genhtml_function_coverage=1 00:33:23.780 --rc genhtml_legend=1 00:33:23.780 --rc geninfo_all_blocks=1 00:33:23.780 --rc geninfo_unexecuted_blocks=1 00:33:23.780 00:33:23.780 ' 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:33:23.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.780 --rc genhtml_branch_coverage=1 00:33:23.780 --rc genhtml_function_coverage=1 00:33:23.780 --rc genhtml_legend=1 00:33:23.780 --rc geninfo_all_blocks=1 00:33:23.780 --rc geninfo_unexecuted_blocks=1 00:33:23.780 00:33:23.780 ' 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:33:23.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.780 --rc genhtml_branch_coverage=1 00:33:23.780 --rc genhtml_function_coverage=1 00:33:23.780 --rc genhtml_legend=1 00:33:23.780 --rc geninfo_all_blocks=1 00:33:23.780 --rc geninfo_unexecuted_blocks=1 00:33:23.780 00:33:23.780 ' 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:33:23.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.780 --rc genhtml_branch_coverage=1 00:33:23.780 --rc genhtml_function_coverage=1 00:33:23.780 --rc genhtml_legend=1 00:33:23.780 --rc geninfo_all_blocks=1 00:33:23.780 --rc geninfo_unexecuted_blocks=1 00:33:23.780 00:33:23.780 ' 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:23.780 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@438 -- # remove_spdk_ns 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:33:23.781 09:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:31.928 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:31.928 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:33:31.928 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:31.928 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:31.928 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:31.928 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:31.928 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:31.929 
09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:31.929 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:31.929 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 
> 0 )) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:31.929 Found net devices under 0000:31:00.0: cvl_0_0 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:31.929 Found net devices under 0000:31:00.1: cvl_0_1 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:31.929 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:31.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:31.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.744 ms 00:33:31.930 00:33:31.930 --- 10.0.0.2 ping statistics --- 00:33:31.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.930 rtt min/avg/max/mdev = 0.744/0.744/0.744/0.000 ms 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:31.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:31.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:33:31.930 00:33:31.930 --- 10.0.0.1 ping statistics --- 00:33:31.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.930 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=3587856 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 3587856 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@834 -- # '[' -z 3587856 ']' 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local max_retries=100 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@843 -- # xtrace_disable 00:33:31.930 09:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:31.930 [2024-10-07 09:54:30.998868] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
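Stripped of trace prefixes, the nvmf_tcp_init sequence traced above builds a two-namespace loopback between this rig's two E810 ports. A condensed sketch of the commands the trace shows (interface names cvl_0_0/cvl_0_1 are specific to this machine; the real iptables call also tags the rule with an SPDK_NVMF comment so teardown can find and remove it):

    ip netns add cvl_0_0_ns_spdk                        # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # one port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the host netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back

The two pings above are exactly the reachability check whose output follows in the trace.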
00:33:31.930 [2024-10-07 09:54:31.000001] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:33:31.930 [2024-10-07 09:54:31.000046] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.930 [2024-10-07 09:54:31.091967] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:31.930 [2024-10-07 09:54:31.186987] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:31.930 [2024-10-07 09:54:31.187051] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:31.930 [2024-10-07 09:54:31.187061] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:31.930 [2024-10-07 09:54:31.187069] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:31.930 [2024-10-07 09:54:31.187075] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:31.930 [2024-10-07 09:54:31.188599] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.930 [2024-10-07 09:54:31.188764] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:33:31.930 [2024-10-07 09:54:31.188911] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.930 [2024-10-07 09:54:31.282120] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:31.930 [2024-10-07 09:54:31.283076] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:31.930 [2024-10-07 09:54:31.283232] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:31.930 [2024-10-07 09:54:31.283449] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:33:32.192 09:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:33:32.192 09:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@867 -- # return 0 00:33:32.192 09:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:32.192 09:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@733 -- # xtrace_disable 00:33:32.192 09:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:32.454 09:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:32.454 09:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:32.454 [2024-10-07 09:54:32.037892] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.454 09:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:32.715 09:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:33:32.715 09:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:32.977 09:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:33:32.977 09:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:33:33.238 09:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:33:33.499 09:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=806518e4-2e7b-4413-a33f-455eb58afc1a 00:33:33.499 09:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 806518e4-2e7b-4413-a33f-455eb58afc1a lvol 20 00:33:33.499 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0179cb65-8240-4494-a8a2-c7cc1c5ff4a0 00:33:33.499 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:33.761 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0179cb65-8240-4494-a8a2-c7cc1c5ff4a0 00:33:34.022 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:34.023 [2024-10-07 09:54:33.605842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:33:34.023 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:34.285 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3588550 00:33:34.285 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:33:34.285 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:33:35.230 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0179cb65-8240-4494-a8a2-c7cc1c5ff4a0 MY_SNAPSHOT 00:33:35.492 09:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ba364c29-3c06-4a14-a968-93f8af692e6d 00:33:35.492 09:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0179cb65-8240-4494-a8a2-c7cc1c5ff4a0 30 00:33:35.754 09:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ba364c29-3c06-4a14-a968-93f8af692e6d MY_CLONE 00:33:36.015 09:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1ba95893-aaf6-41b3-92b2-2d1d7604b551 00:33:36.015 09:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1ba95893-aaf6-41b3-92b2-2d1d7604b551 00:33:36.587 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3588550 00:33:44.731 Initializing NVMe Controllers 00:33:44.731 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:44.731 Controller IO queue size 128, less than required. 00:33:44.731 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:44.731 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:33:44.731 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:33:44.731 Initialization complete. Launching workers. 
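For reference before the perf results land, the rpc.py sequence just traced condenses to the following (each create call prints its bdev name or UUID on stdout, which is why the script captures them; values shown are symbolic):

    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512                   # -> Malloc0 (64 MiB, 512 B blocks)
    $rpc_py bdev_malloc_create 64 512                   # -> Malloc1
    $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)  # 20 MiB logical volume
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &   # random writes run throughout
    snapshot=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc_py bdev_lvol_resize "$lvol" 30                 # grow 20 -> 30 MiB under live I/O
    clone=$($rpc_py bdev_lvol_clone "$snapshot" MY_CLONE)
    $rpc_py bdev_lvol_inflate "$clone"
    wait                                                # perf must finish cleanly despite all of it

The point of the test is the last five lines: snapshot, resize, clone, and inflate all happen while spdk_nvme_perf is writing to the exported lvol, and the run only passes if perf completes without error.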
00:33:44.731 ======================================================== 00:33:44.731 Latency(us) 00:33:44.731 Device Information : IOPS MiB/s Average min max 00:33:44.731 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15229.60 59.49 8407.31 1806.92 67005.68 00:33:44.731 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15090.20 58.95 8482.78 1680.04 58440.74 00:33:44.731 ======================================================== 00:33:44.732 Total : 30319.80 118.44 8444.87 1680.04 67005.68 00:33:44.732 00:33:44.732 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:45.037 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0179cb65-8240-4494-a8a2-c7cc1c5ff4a0 00:33:45.298 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 806518e4-2e7b-4413-a33f-455eb58afc1a 00:33:45.298 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:33:45.298 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:33:45.298 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:33:45.299 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:45.299 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:33:45.299 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:45.299 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:33:45.299 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:45.299 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:45.299 rmmod nvme_tcp 00:33:45.299 rmmod nvme_fabrics 00:33:45.299 rmmod nvme_keyring 00:33:45.299 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:45.299 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:33:45.299 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:33:45.299 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 3587856 ']' 00:33:45.299 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 3587856 00:33:45.299 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' -z 3587856 ']' 00:33:45.299 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # kill -0 3587856 00:33:45.299 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # uname 00:33:45.299 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:33:45.299 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3587856 00:33:45.560 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:33:45.560 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:33:45.560 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3587856' 00:33:45.560 killing process with pid 3587856 00:33:45.560 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # kill 3587856 00:33:45.560 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@977 -- # wait 3587856 00:33:45.560 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:45.560 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:45.560 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:45.560 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:33:45.560 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:33:45.560 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:45.560 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:33:45.560 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:45.560 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:45.560 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.560 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.560 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:48.111 00:33:48.111 real 0m24.253s 00:33:48.111 user 0m56.447s 00:33:48.111 sys 0m11.085s 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # xtrace_disable 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:48.111 ************************************ 00:33:48.111 END TEST nvmf_lvol 00:33:48.111 ************************************ 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1110 -- # xtrace_disable 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:48.111 ************************************ 00:33:48.111 START TEST nvmf_lvs_grow 00:33:48.111 
************************************ 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:48.111 * Looking for test storage... 00:33:48.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1626 -- # lcov --version 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:33:48.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.111 --rc genhtml_branch_coverage=1 00:33:48.111 --rc genhtml_function_coverage=1 00:33:48.111 --rc genhtml_legend=1 00:33:48.111 --rc geninfo_all_blocks=1 00:33:48.111 --rc geninfo_unexecuted_blocks=1 00:33:48.111 00:33:48.111 ' 00:33:48.111 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:33:48.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.111 --rc genhtml_branch_coverage=1 00:33:48.111 --rc genhtml_function_coverage=1 00:33:48.111 --rc genhtml_legend=1 00:33:48.111 --rc geninfo_all_blocks=1 00:33:48.111 --rc geninfo_unexecuted_blocks=1 00:33:48.111 00:33:48.111 ' 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:33:48.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.112 --rc genhtml_branch_coverage=1 00:33:48.112 --rc genhtml_function_coverage=1 00:33:48.112 --rc genhtml_legend=1 00:33:48.112 --rc geninfo_all_blocks=1 00:33:48.112 --rc geninfo_unexecuted_blocks=1 00:33:48.112 00:33:48.112 ' 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:33:48.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.112 --rc genhtml_branch_coverage=1 00:33:48.112 --rc genhtml_function_coverage=1 00:33:48.112 --rc genhtml_legend=1 00:33:48.112 --rc geninfo_all_blocks=1 00:33:48.112 --rc geninfo_unexecuted_blocks=1 00:33:48.112 00:33:48.112 ' 00:33:48.112 09:54:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:33:48.112 09:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:56.255 
09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:56.255 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:56.255 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:56.255 09:54:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:56.255 Found net devices under 0000:31:00.0: cvl_0_0 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:56.255 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:56.256 Found net devices under 0000:31:00.1: cvl_0_1 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:56.256 09:54:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:56.256 09:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:56.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:56.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:33:56.256 00:33:56.256 --- 10.0.0.2 ping statistics --- 00:33:56.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:56.256 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:56.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:56.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:33:56.256 00:33:56.256 --- 10.0.0.1 ping statistics --- 00:33:56.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:56.256 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=3594975 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 3594975 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # '[' -z 3594975 ']' 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local max_retries=100 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:56.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@843 -- # xtrace_disable 00:33:56.256 09:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:56.256 [2024-10-07 09:54:55.336069] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:56.256 [2024-10-07 09:54:55.337224] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:33:56.256 [2024-10-07 09:54:55.337277] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:56.256 [2024-10-07 09:54:55.425394] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.256 [2024-10-07 09:54:55.519233] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:56.256 [2024-10-07 09:54:55.519293] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:56.256 [2024-10-07 09:54:55.519302] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:56.256 [2024-10-07 09:54:55.519309] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:56.256 [2024-10-07 09:54:55.519316] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:56.256 [2024-10-07 09:54:55.520151] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.256 [2024-10-07 09:54:55.596121] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:56.256 [2024-10-07 09:54:55.596408] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
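[Annotation] With the target now up in interrupt mode, here is the whole namespace topology the trace above assembled, condensed into one hedged replay (same names and addresses as the log; run as root): the target-side port cvl_0_0 lives in namespace cvl_0_0_ns_spdk as 10.0.0.2, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1, and nvmf_tgt runs inside the namespace.

    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # namespace -> root ns
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF \
        --interrupt-mode -m 0x1 &                     # single core, intr mode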
00:33:56.518 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:33:56.518 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@867 -- # return 0 00:33:56.518 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:56.518 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@733 -- # xtrace_disable 00:33:56.518 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:56.779 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:56.779 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:56.779 [2024-10-07 09:54:56.357076] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:56.779 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:33:56.779 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:33:56.779 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1110 -- # xtrace_disable 00:33:56.779 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:56.779 ************************************ 00:33:56.779 START TEST lvs_grow_clean 00:33:56.779 ************************************ 00:33:56.779 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # lvs_grow 00:33:56.779 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:56.779 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:56.779 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:56.779 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:56.779 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:56.779 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:56.779 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:56.780 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:56.780 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:57.041 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:57.041 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:57.302 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7e33fada-dc67-4fa5-b78f-f2557629228b 00:33:57.302 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e33fada-dc67-4fa5-b78f-f2557629228b 00:33:57.302 09:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:57.563 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:57.563 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:57.563 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7e33fada-dc67-4fa5-b78f-f2557629228b lvol 150 00:33:57.563 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1070bcaa-8a9e-42f1-ac46-a98ff24c3596 00:33:57.563 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:57.563 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:57.824 [2024-10-07 09:54:57.364749] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:57.824 [2024-10-07 09:54:57.364907] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:57.824 true 00:33:57.824 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e33fada-dc67-4fa5-b78f-f2557629228b 00:33:57.824 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:58.085 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:58.085 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:58.346 09:54:57 
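[Annotation] The clean-grow setup just traced, collapsed into a runnable-shaped sketch (absolute paths shortened; rpc.py stands in for scripts/rpc.py; cluster size is 4 MiB):

    AIO_FILE=./test/nvmf/target/aio_bdev
    truncate -s 200M "$AIO_FILE"
    rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # 49 data clusters
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)      # 150 MiB volume
    truncate -s 400M "$AIO_FILE"                            # grow backing file
    rpc.py bdev_aio_rescan aio_bdev                         # 51200 -> 102400 blocks
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'
    # still 49 here: the lvstore itself only grows once bdev_lvol_grow_lvstore
    # is issued mid-I/O further down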
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1070bcaa-8a9e-42f1-ac46-a98ff24c3596 00:33:58.346 09:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:58.606 [2024-10-07 09:54:58.101405] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:58.606 09:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:58.866 09:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:58.866 09:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3595411 00:33:58.866 09:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:58.866 09:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3595411 /var/tmp/bdevperf.sock 00:33:58.866 09:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # '[' -z 3595411 ']' 00:33:58.866 09:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:58.866 09:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local max_retries=100 00:33:58.866 09:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:58.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:58.866 09:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@843 -- # xtrace_disable 00:33:58.866 09:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:58.866 [2024-10-07 09:54:58.335639] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
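[Annotation] The initiator side being wired up here, sketched end to end (flags copied from the trace; the backgrounding and sequencing are my framing): bdevperf runs as a second SPDK app on core 1, attaches the exported namespace over TCP as bdev Nvme0n1, and the helper script later drives the 10 s randwrite job.

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
        -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests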
00:33:58.866 [2024-10-07 09:54:58.335713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3595411 ] 00:33:58.866 [2024-10-07 09:54:58.421693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:58.866 [2024-10-07 09:54:58.516795] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:59.808 09:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:33:59.808 09:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@867 -- # return 0 00:33:59.808 09:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:59.808 Nvme0n1 00:33:59.808 09:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:00.069 [ 00:34:00.069 { 00:34:00.069 "name": "Nvme0n1", 00:34:00.069 "aliases": [ 00:34:00.069 "1070bcaa-8a9e-42f1-ac46-a98ff24c3596" 00:34:00.069 ], 00:34:00.069 "product_name": "NVMe disk", 00:34:00.069 "block_size": 4096, 00:34:00.069 "num_blocks": 38912, 00:34:00.069 "uuid": "1070bcaa-8a9e-42f1-ac46-a98ff24c3596", 00:34:00.069 "numa_id": 0, 00:34:00.069 "assigned_rate_limits": { 00:34:00.069 "rw_ios_per_sec": 0, 00:34:00.069 "rw_mbytes_per_sec": 0, 00:34:00.069 "r_mbytes_per_sec": 0, 00:34:00.069 "w_mbytes_per_sec": 0 00:34:00.069 }, 00:34:00.069 "claimed": false, 00:34:00.069 "zoned": false, 00:34:00.069 "supported_io_types": { 00:34:00.069 "read": true, 00:34:00.069 "write": true, 00:34:00.069 "unmap": true, 00:34:00.069 "flush": true, 00:34:00.069 "reset": true, 00:34:00.069 "nvme_admin": true, 00:34:00.069 "nvme_io": true, 00:34:00.069 "nvme_io_md": false, 00:34:00.069 "write_zeroes": true, 00:34:00.069 "zcopy": false, 00:34:00.069 "get_zone_info": false, 00:34:00.069 "zone_management": false, 00:34:00.069 "zone_append": false, 00:34:00.069 "compare": true, 00:34:00.069 "compare_and_write": true, 00:34:00.069 "abort": true, 00:34:00.069 "seek_hole": false, 00:34:00.069 "seek_data": false, 00:34:00.069 "copy": true, 00:34:00.069 "nvme_iov_md": false 00:34:00.069 }, 00:34:00.069 "memory_domains": [ 00:34:00.069 { 00:34:00.069 "dma_device_id": "system", 00:34:00.069 "dma_device_type": 1 00:34:00.069 } 00:34:00.069 ], 00:34:00.069 "driver_specific": { 00:34:00.069 "nvme": [ 00:34:00.069 { 00:34:00.069 "trid": { 00:34:00.069 "trtype": "TCP", 00:34:00.069 "adrfam": "IPv4", 00:34:00.069 "traddr": "10.0.0.2", 00:34:00.069 "trsvcid": "4420", 00:34:00.069 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:00.069 }, 00:34:00.069 "ctrlr_data": { 00:34:00.069 "cntlid": 1, 00:34:00.069 "vendor_id": "0x8086", 00:34:00.069 "model_number": "SPDK bdev Controller", 00:34:00.069 "serial_number": "SPDK0", 00:34:00.069 "firmware_revision": "25.01", 00:34:00.069 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:00.069 "oacs": { 00:34:00.069 "security": 0, 00:34:00.069 "format": 0, 00:34:00.069 "firmware": 0, 00:34:00.069 "ns_manage": 0 00:34:00.069 }, 
00:34:00.069 "multi_ctrlr": true, 00:34:00.069 "ana_reporting": false 00:34:00.069 }, 00:34:00.069 "vs": { 00:34:00.069 "nvme_version": "1.3" 00:34:00.069 }, 00:34:00.069 "ns_data": { 00:34:00.069 "id": 1, 00:34:00.069 "can_share": true 00:34:00.069 } 00:34:00.069 } 00:34:00.069 ], 00:34:00.069 "mp_policy": "active_passive" 00:34:00.069 } 00:34:00.069 } 00:34:00.069 ] 00:34:00.069 09:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3595704 00:34:00.069 09:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:00.069 09:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:00.069 Running I/O for 10 seconds... 00:34:01.457 Latency(us) 00:34:01.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:01.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:01.457 Nvme0n1 : 1.00 16734.00 65.37 0.00 0.00 0.00 0.00 0.00 00:34:01.457 =================================================================================================================== 00:34:01.457 Total : 16734.00 65.37 0.00 0.00 0.00 0.00 0.00 00:34:01.457 00:34:02.028 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7e33fada-dc67-4fa5-b78f-f2557629228b 00:34:02.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:02.289 Nvme0n1 : 2.00 17054.00 66.62 0.00 0.00 0.00 0.00 0.00 00:34:02.289 =================================================================================================================== 00:34:02.289 Total : 17054.00 66.62 0.00 0.00 0.00 0.00 0.00 00:34:02.289 00:34:02.289 true 00:34:02.289 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e33fada-dc67-4fa5-b78f-f2557629228b 00:34:02.289 09:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:02.549 09:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:02.549 09:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:02.549 09:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3595704 00:34:03.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:03.122 Nvme0n1 : 3.00 17278.33 67.49 0.00 0.00 0.00 0.00 0.00 00:34:03.122 =================================================================================================================== 00:34:03.122 Total : 17278.33 67.49 0.00 0.00 0.00 0.00 0.00 00:34:03.122 00:34:04.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:04.508 Nvme0n1 : 4.00 17487.00 68.31 0.00 0.00 0.00 0.00 0.00 00:34:04.508 =================================================================================================================== 
00:34:04.508 Total : 17487.00 68.31 0.00 0.00 0.00 0.00 0.00 00:34:04.508 00:34:05.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:05.078 Nvme0n1 : 5.00 18712.60 73.10 0.00 0.00 0.00 0.00 0.00 00:34:05.078 =================================================================================================================== 00:34:05.078 Total : 18712.60 73.10 0.00 0.00 0.00 0.00 0.00 00:34:05.078 00:34:06.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:06.465 Nvme0n1 : 6.00 19860.33 77.58 0.00 0.00 0.00 0.00 0.00 00:34:06.465 =================================================================================================================== 00:34:06.465 Total : 19860.33 77.58 0.00 0.00 0.00 0.00 0.00 00:34:06.465 00:34:07.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:07.409 Nvme0n1 : 7.00 20680.14 80.78 0.00 0.00 0.00 0.00 0.00 00:34:07.409 =================================================================================================================== 00:34:07.409 Total : 20680.14 80.78 0.00 0.00 0.00 0.00 0.00 00:34:07.409 00:34:08.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:08.353 Nvme0n1 : 8.00 21302.75 83.21 0.00 0.00 0.00 0.00 0.00 00:34:08.353 =================================================================================================================== 00:34:08.353 Total : 21302.75 83.21 0.00 0.00 0.00 0.00 0.00 00:34:08.353 00:34:09.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:09.295 Nvme0n1 : 9.00 21787.22 85.11 0.00 0.00 0.00 0.00 0.00 00:34:09.295 =================================================================================================================== 00:34:09.295 Total : 21787.22 85.11 0.00 0.00 0.00 0.00 0.00 00:34:09.295 00:34:10.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:10.238 Nvme0n1 : 10.00 22171.90 86.61 0.00 0.00 0.00 0.00 0.00 00:34:10.238 =================================================================================================================== 00:34:10.238 Total : 22171.90 86.61 0.00 0.00 0.00 0.00 0.00 00:34:10.238 00:34:10.238 00:34:10.238 Latency(us) 00:34:10.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:10.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:10.238 Nvme0n1 : 10.01 22176.32 86.63 0.00 0.00 5768.65 3085.65 32112.64 00:34:10.238 =================================================================================================================== 00:34:10.238 Total : 22176.32 86.63 0.00 0.00 5768.65 3085.65 32112.64 00:34:10.238 { 00:34:10.238 "results": [ 00:34:10.238 { 00:34:10.238 "job": "Nvme0n1", 00:34:10.238 "core_mask": "0x2", 00:34:10.238 "workload": "randwrite", 00:34:10.238 "status": "finished", 00:34:10.238 "queue_depth": 128, 00:34:10.238 "io_size": 4096, 00:34:10.238 "runtime": 10.005178, 00:34:10.238 "iops": 22176.317103004065, 00:34:10.238 "mibps": 86.62623868360963, 00:34:10.238 "io_failed": 0, 00:34:10.238 "io_timeout": 0, 00:34:10.238 "avg_latency_us": 5768.647912095837, 00:34:10.238 "min_latency_us": 3085.653333333333, 00:34:10.238 "max_latency_us": 32112.64 00:34:10.238 } 00:34:10.238 ], 00:34:10.238 "core_count": 1 00:34:10.238 } 00:34:10.238 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3595411 00:34:10.238 09:55:09 
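[Annotation] The results blob printed above is plain JSON; a one-liner to reduce it to a summary, assuming it were captured to a hypothetical results.json (field names exactly as printed above):

    jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json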
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' -z 3595411 ']' 00:34:10.238 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # kill -0 3595411 00:34:10.238 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # uname 00:34:10.238 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:34:10.238 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3595411 00:34:10.238 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:34:10.238 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:34:10.238 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3595411' 00:34:10.238 killing process with pid 3595411 00:34:10.239 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # kill 3595411 00:34:10.239 Received shutdown signal, test time was about 10.000000 seconds 00:34:10.239 00:34:10.239 Latency(us) 00:34:10.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:10.239 =================================================================================================================== 00:34:10.239 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:10.239 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@977 -- # wait 3595411 00:34:10.500 09:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:10.500 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:10.762 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e33fada-dc67-4fa5-b78f-f2557629228b 00:34:10.762 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:11.023 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:11.023 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:34:11.023 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:11.023 [2024-10-07 09:55:10.628819] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:11.023 09:55:10 
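[Annotation] What follows is a negative check: deleting the backing AIO bdev hot-removed the lvstore (the vbdev_lvs_hotremove_cb NOTICE just above), so looking the lvstore up again must fail with "No such device", and the harness's NOT wrapper turns that expected failure into a pass. Roughly (my framing, not the wrapper's own code):

    rpc.py bdev_aio_delete aio_bdev
    if rpc.py bdev_lvol_get_lvstores -u "$lvs"; then
        echo "lvstore survived backing-bdev removal" >&2
        exit 1
    fi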
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e33fada-dc67-4fa5-b78f-f2557629228b 00:34:11.023 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # local es=0 00:34:11.023 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e33fada-dc67-4fa5-b78f-f2557629228b 00:34:11.023 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:11.023 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:34:11.023 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:11.023 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:34:11.024 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@647 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:11.024 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:34:11.024 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@647 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:11.024 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@647 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:11.024 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@656 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e33fada-dc67-4fa5-b78f-f2557629228b 00:34:11.285 request: 00:34:11.285 { 00:34:11.285 "uuid": "7e33fada-dc67-4fa5-b78f-f2557629228b", 00:34:11.285 "method": "bdev_lvol_get_lvstores", 00:34:11.285 "req_id": 1 00:34:11.285 } 00:34:11.285 Got JSON-RPC error response 00:34:11.285 response: 00:34:11.285 { 00:34:11.285 "code": -19, 00:34:11.285 "message": "No such device" 00:34:11.285 } 00:34:11.285 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@656 -- # es=1 00:34:11.285 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:34:11.285 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:34:11.285 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:34:11.285 09:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:11.546 aio_bdev 00:34:11.546 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1070bcaa-8a9e-42f1-ac46-a98ff24c3596 00:34:11.546 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_name=1070bcaa-8a9e-42f1-ac46-a98ff24c3596 00:34:11.546 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:34:11.546 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local i 00:34:11.546 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:34:11.546 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:34:11.546 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:11.546 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1070bcaa-8a9e-42f1-ac46-a98ff24c3596 -t 2000 00:34:11.808 [ 00:34:11.808 { 00:34:11.808 "name": "1070bcaa-8a9e-42f1-ac46-a98ff24c3596", 00:34:11.808 "aliases": [ 00:34:11.808 "lvs/lvol" 00:34:11.808 ], 00:34:11.808 "product_name": "Logical Volume", 00:34:11.808 "block_size": 4096, 00:34:11.808 "num_blocks": 38912, 00:34:11.808 "uuid": "1070bcaa-8a9e-42f1-ac46-a98ff24c3596", 00:34:11.808 "assigned_rate_limits": { 00:34:11.808 "rw_ios_per_sec": 0, 00:34:11.808 "rw_mbytes_per_sec": 0, 00:34:11.808 "r_mbytes_per_sec": 0, 00:34:11.808 "w_mbytes_per_sec": 0 00:34:11.808 }, 00:34:11.808 "claimed": false, 00:34:11.808 "zoned": false, 00:34:11.808 "supported_io_types": { 00:34:11.808 "read": true, 00:34:11.808 "write": true, 00:34:11.808 "unmap": true, 00:34:11.808 "flush": false, 00:34:11.808 "reset": true, 00:34:11.808 "nvme_admin": false, 00:34:11.808 "nvme_io": false, 00:34:11.808 "nvme_io_md": false, 00:34:11.808 "write_zeroes": true, 00:34:11.808 "zcopy": false, 00:34:11.808 "get_zone_info": false, 00:34:11.808 "zone_management": false, 00:34:11.808 "zone_append": false, 00:34:11.808 "compare": false, 00:34:11.808 "compare_and_write": false, 00:34:11.808 "abort": false, 00:34:11.808 "seek_hole": true, 00:34:11.808 "seek_data": true, 00:34:11.808 "copy": false, 00:34:11.808 "nvme_iov_md": false 00:34:11.808 }, 00:34:11.808 "driver_specific": { 00:34:11.808 "lvol": { 00:34:11.808 "lvol_store_uuid": "7e33fada-dc67-4fa5-b78f-f2557629228b", 00:34:11.808 "base_bdev": "aio_bdev", 00:34:11.808 "thin_provision": false, 00:34:11.808 "num_allocated_clusters": 38, 00:34:11.808 "snapshot": false, 00:34:11.808 "clone": false, 00:34:11.808 "esnap_clone": false 00:34:11.808 } 00:34:11.808 } 00:34:11.808 } 00:34:11.808 ] 00:34:11.808 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # return 0 00:34:11.808 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 7e33fada-dc67-4fa5-b78f-f2557629228b 00:34:11.808 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:12.069 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:12.069 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e33fada-dc67-4fa5-b78f-f2557629228b 00:34:12.069 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:12.069 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:12.069 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1070bcaa-8a9e-42f1-ac46-a98ff24c3596 00:34:12.330 09:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7e33fada-dc67-4fa5-b78f-f2557629228b 00:34:12.591 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:12.852 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:12.852 00:34:12.852 real 0m15.879s 00:34:12.852 user 0m15.570s 00:34:12.852 sys 0m1.441s 00:34:12.852 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # xtrace_disable 00:34:12.852 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:12.852 ************************************ 00:34:12.852 END TEST lvs_grow_clean 00:34:12.852 ************************************ 00:34:12.852 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:34:12.852 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:34:12.852 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1110 -- # xtrace_disable 00:34:12.852 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:12.852 ************************************ 00:34:12.852 START TEST lvs_grow_dirty 00:34:12.852 ************************************ 00:34:12.852 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # lvs_grow dirty 00:34:12.852 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:12.852 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:12.852 09:55:12 
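[Annotation] Cluster arithmetic behind the equality checks the clean run just passed (4 MiB clusters): the 200 MiB backing file holds 50 clusters, of which one evidently goes to lvstore metadata, giving total_data_clusters == 49; after the grow to 400 MiB that becomes 99; and the 150 MiB lvol pins ceil(150/4) = 38 of them ("num_allocated_clusters": 38 in the bdev dump above), so free_clusters == 99 - 38 == 61. The dirty variant starting here re-runs the same flow via run_test lvs_grow_dirty lvs_grow dirty, taking the [[ $1 == dirty ]] branch that the clean run skipped as [[ '' == dirty ]].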
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:12.852 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:12.852 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:12.852 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:12.852 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:12.852 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:12.852 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:13.114 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:13.114 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:13.375 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a294b70f-ed1f-42a1-bfd4-bfd475825848 00:34:13.375 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a294b70f-ed1f-42a1-bfd4-bfd475825848 00:34:13.375 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:13.375 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:13.375 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:13.375 09:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a294b70f-ed1f-42a1-bfd4-bfd475825848 lvol 150 00:34:13.636 09:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d94f5b9a-c2fa-4aa2-a84a-3e5afb83b2eb 00:34:13.636 09:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:13.636 09:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:13.636 [2024-10-07 09:55:13.288721] 
bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:13.636 [2024-10-07 09:55:13.288866] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:13.636 true 00:34:13.898 09:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a294b70f-ed1f-42a1-bfd4-bfd475825848 00:34:13.898 09:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:13.898 09:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:13.898 09:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:14.160 09:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d94f5b9a-c2fa-4aa2-a84a-3e5afb83b2eb 00:34:14.422 09:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:14.422 [2024-10-07 09:55:13.989317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:14.422 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:14.682 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3598439 00:34:14.682 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:14.683 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:14.683 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3598439 /var/tmp/bdevperf.sock 00:34:14.683 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # '[' -z 3598439 ']' 00:34:14.683 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:14.683 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local max_retries=100 00:34:14.683 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:14.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:14.683 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@843 -- # xtrace_disable 00:34:14.683 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:14.683 [2024-10-07 09:55:14.210662] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:34:14.683 [2024-10-07 09:55:14.210718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3598439 ] 00:34:14.683 [2024-10-07 09:55:14.287447] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.683 [2024-10-07 09:55:14.341764] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.624 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:34:15.624 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@867 -- # return 0 00:34:15.624 09:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:15.624 Nvme0n1 00:34:15.624 09:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:15.885 [ 00:34:15.885 { 00:34:15.885 "name": "Nvme0n1", 00:34:15.885 "aliases": [ 00:34:15.885 "d94f5b9a-c2fa-4aa2-a84a-3e5afb83b2eb" 00:34:15.885 ], 00:34:15.885 "product_name": "NVMe disk", 00:34:15.885 "block_size": 4096, 00:34:15.885 "num_blocks": 38912, 00:34:15.885 "uuid": "d94f5b9a-c2fa-4aa2-a84a-3e5afb83b2eb", 00:34:15.885 "numa_id": 0, 00:34:15.885 "assigned_rate_limits": { 00:34:15.885 "rw_ios_per_sec": 0, 00:34:15.885 "rw_mbytes_per_sec": 0, 00:34:15.885 "r_mbytes_per_sec": 0, 00:34:15.885 "w_mbytes_per_sec": 0 00:34:15.885 }, 00:34:15.885 "claimed": false, 00:34:15.885 "zoned": false, 00:34:15.885 "supported_io_types": { 00:34:15.885 "read": true, 00:34:15.885 "write": true, 00:34:15.885 "unmap": true, 00:34:15.885 "flush": true, 00:34:15.885 "reset": true, 00:34:15.885 "nvme_admin": true, 00:34:15.885 "nvme_io": true, 00:34:15.885 "nvme_io_md": false, 00:34:15.885 "write_zeroes": true, 00:34:15.885 "zcopy": false, 00:34:15.885 "get_zone_info": false, 00:34:15.885 "zone_management": false, 00:34:15.885 "zone_append": false, 00:34:15.885 "compare": true, 00:34:15.885 "compare_and_write": true, 00:34:15.885 "abort": true, 00:34:15.885 "seek_hole": false, 00:34:15.885 "seek_data": false, 00:34:15.885 "copy": true, 00:34:15.885 "nvme_iov_md": false 00:34:15.885 }, 00:34:15.885 "memory_domains": [ 00:34:15.885 { 00:34:15.885 "dma_device_id": "system", 00:34:15.885 "dma_device_type": 1 00:34:15.885 } 00:34:15.885 ], 00:34:15.885 "driver_specific": { 00:34:15.885 "nvme": [ 00:34:15.885 { 00:34:15.885 "trid": { 00:34:15.885 "trtype": "TCP", 00:34:15.885 "adrfam": "IPv4", 
00:34:15.885 "traddr": "10.0.0.2", 00:34:15.885 "trsvcid": "4420", 00:34:15.885 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:15.885 }, 00:34:15.885 "ctrlr_data": { 00:34:15.885 "cntlid": 1, 00:34:15.885 "vendor_id": "0x8086", 00:34:15.885 "model_number": "SPDK bdev Controller", 00:34:15.885 "serial_number": "SPDK0", 00:34:15.885 "firmware_revision": "25.01", 00:34:15.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:15.885 "oacs": { 00:34:15.885 "security": 0, 00:34:15.885 "format": 0, 00:34:15.885 "firmware": 0, 00:34:15.885 "ns_manage": 0 00:34:15.885 }, 00:34:15.885 "multi_ctrlr": true, 00:34:15.885 "ana_reporting": false 00:34:15.885 }, 00:34:15.885 "vs": { 00:34:15.885 "nvme_version": "1.3" 00:34:15.885 }, 00:34:15.885 "ns_data": { 00:34:15.885 "id": 1, 00:34:15.885 "can_share": true 00:34:15.885 } 00:34:15.885 } 00:34:15.885 ], 00:34:15.885 "mp_policy": "active_passive" 00:34:15.885 } 00:34:15.885 } 00:34:15.885 ] 00:34:15.886 09:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:15.886 09:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3598777 00:34:15.886 09:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:15.886 Running I/O for 10 seconds... 00:34:17.272 Latency(us) 00:34:17.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:17.272 Nvme0n1 : 1.00 17167.00 67.06 0.00 0.00 0.00 0.00 0.00 00:34:17.272 =================================================================================================================== 00:34:17.272 Total : 17167.00 67.06 0.00 0.00 0.00 0.00 0.00 00:34:17.272 00:34:17.843 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a294b70f-ed1f-42a1-bfd4-bfd475825848 00:34:17.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:17.843 Nvme0n1 : 2.00 17447.00 68.15 0.00 0.00 0.00 0.00 0.00 00:34:17.843 =================================================================================================================== 00:34:17.843 Total : 17447.00 68.15 0.00 0.00 0.00 0.00 0.00 00:34:17.843 00:34:18.103 true 00:34:18.103 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a294b70f-ed1f-42a1-bfd4-bfd475825848 00:34:18.103 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:18.364 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:18.364 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:18.364 09:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3598777 00:34:18.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:34:18.938 Nvme0n1 : 3.00 17496.00 68.34 0.00 0.00 0.00 0.00 0.00 00:34:18.938 =================================================================================================================== 00:34:18.938 Total : 17496.00 68.34 0.00 0.00 0.00 0.00 0.00 00:34:18.938 00:34:19.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:19.880 Nvme0n1 : 4.00 17752.75 69.35 0.00 0.00 0.00 0.00 0.00 00:34:19.880 =================================================================================================================== 00:34:19.880 Total : 17752.75 69.35 0.00 0.00 0.00 0.00 0.00 00:34:19.880 00:34:21.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:21.263 Nvme0n1 : 5.00 19183.60 74.94 0.00 0.00 0.00 0.00 0.00 00:34:21.263 =================================================================================================================== 00:34:21.263 Total : 19183.60 74.94 0.00 0.00 0.00 0.00 0.00 00:34:21.263 00:34:22.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:22.206 Nvme0n1 : 6.00 20149.83 78.71 0.00 0.00 0.00 0.00 0.00 00:34:22.206 =================================================================================================================== 00:34:22.206 Total : 20149.83 78.71 0.00 0.00 0.00 0.00 0.00 00:34:22.206 00:34:23.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:23.150 Nvme0n1 : 7.00 20818.71 81.32 0.00 0.00 0.00 0.00 0.00 00:34:23.150 =================================================================================================================== 00:34:23.150 Total : 20818.71 81.32 0.00 0.00 0.00 0.00 0.00 00:34:23.150 00:34:24.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:24.093 Nvme0n1 : 8.00 21323.50 83.29 0.00 0.00 0.00 0.00 0.00 00:34:24.093 =================================================================================================================== 00:34:24.093 Total : 21323.50 83.29 0.00 0.00 0.00 0.00 0.00 00:34:24.093 00:34:25.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:25.032 Nvme0n1 : 9.00 21723.78 84.86 0.00 0.00 0.00 0.00 0.00 00:34:25.032 =================================================================================================================== 00:34:25.032 Total : 21723.78 84.86 0.00 0.00 0.00 0.00 0.00 00:34:25.032 00:34:25.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:25.971 Nvme0n1 : 10.00 22041.10 86.10 0.00 0.00 0.00 0.00 0.00 00:34:25.971 =================================================================================================================== 00:34:25.971 Total : 22041.10 86.10 0.00 0.00 0.00 0.00 0.00 00:34:25.971 00:34:25.971 00:34:25.971 Latency(us) 00:34:25.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:25.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:25.971 Nvme0n1 : 10.00 22039.96 86.09 0.00 0.00 5804.51 3099.31 31238.83 00:34:25.971 =================================================================================================================== 00:34:25.971 Total : 22039.96 86.09 0.00 0.00 5804.51 3099.31 31238.83 00:34:25.971 { 00:34:25.971 "results": [ 00:34:25.971 { 00:34:25.971 "job": "Nvme0n1", 00:34:25.971 "core_mask": "0x2", 00:34:25.971 "workload": "randwrite", 00:34:25.971 "status": "finished", 00:34:25.971 "queue_depth": 128, 00:34:25.971 "io_size": 4096, 
00:34:25.971 "runtime": 10.003376, 00:34:25.971 "iops": 22039.959309737034, 00:34:25.971 "mibps": 86.09359105366029, 00:34:25.971 "io_failed": 0, 00:34:25.971 "io_timeout": 0, 00:34:25.971 "avg_latency_us": 5804.508473924363, 00:34:25.971 "min_latency_us": 3099.306666666667, 00:34:25.971 "max_latency_us": 31238.826666666668 00:34:25.971 } 00:34:25.971 ], 00:34:25.971 "core_count": 1 00:34:25.971 } 00:34:25.971 09:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3598439 00:34:25.971 09:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' -z 3598439 ']' 00:34:25.971 09:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # kill -0 3598439 00:34:25.971 09:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # uname 00:34:25.971 09:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:34:25.971 09:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3598439 00:34:25.971 09:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:34:25.971 09:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:34:25.971 09:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3598439' 00:34:25.971 killing process with pid 3598439 00:34:25.971 09:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # kill 3598439 00:34:25.971 Received shutdown signal, test time was about 10.000000 seconds 00:34:25.971 00:34:25.971 Latency(us) 00:34:25.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:25.971 =================================================================================================================== 00:34:25.971 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:25.971 09:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@977 -- # wait 3598439 00:34:26.232 09:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:26.232 09:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:26.492 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a294b70f-ed1f-42a1-bfd4-bfd475825848 00:34:26.492 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:26.753 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:26.753 
09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:34:26.753 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3594975 00:34:26.753 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3594975 00:34:26.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3594975 Killed "${NVMF_APP[@]}" "$@" 00:34:26.753 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:34:26.753 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:34:26.753 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:26.753 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@727 -- # xtrace_disable 00:34:26.753 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:26.753 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=3600788 00:34:26.753 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 3600788 00:34:26.753 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # '[' -z 3600788 ']' 00:34:26.753 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:26.753 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:26.753 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local max_retries=100 00:34:26.753 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:26.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:26.753 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@843 -- # xtrace_disable 00:34:26.753 09:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:26.753 [2024-10-07 09:55:26.348572] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:26.753 [2024-10-07 09:55:26.349621] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:34:26.753 [2024-10-07 09:55:26.349682] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:27.013 [2024-10-07 09:55:26.436116] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:27.013 [2024-10-07 09:55:26.493596] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:27.013 [2024-10-07 09:55:26.493637] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:27.013 [2024-10-07 09:55:26.493643] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:27.013 [2024-10-07 09:55:26.493648] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:27.013 [2024-10-07 09:55:26.493655] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:27.013 [2024-10-07 09:55:26.494154] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.013 [2024-10-07 09:55:26.544627] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:27.013 [2024-10-07 09:55:26.544816] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:27.585 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:34:27.585 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@867 -- # return 0 00:34:27.585 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:27.585 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@733 -- # xtrace_disable 00:34:27.585 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:27.585 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:27.585 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:27.846 [2024-10-07 09:55:27.352665] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:34:27.846 [2024-10-07 09:55:27.352931] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:34:27.846 [2024-10-07 09:55:27.353020] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:34:27.846 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:34:27.846 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d94f5b9a-c2fa-4aa2-a84a-3e5afb83b2eb 00:34:27.846 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_name=d94f5b9a-c2fa-4aa2-a84a-3e5afb83b2eb 00:34:27.846 09:55:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:34:27.846 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local i 00:34:27.846 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:34:27.846 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:34:27.846 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:28.109 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d94f5b9a-c2fa-4aa2-a84a-3e5afb83b2eb -t 2000 00:34:28.109 [ 00:34:28.109 { 00:34:28.109 "name": "d94f5b9a-c2fa-4aa2-a84a-3e5afb83b2eb", 00:34:28.109 "aliases": [ 00:34:28.109 "lvs/lvol" 00:34:28.109 ], 00:34:28.109 "product_name": "Logical Volume", 00:34:28.109 "block_size": 4096, 00:34:28.109 "num_blocks": 38912, 00:34:28.109 "uuid": "d94f5b9a-c2fa-4aa2-a84a-3e5afb83b2eb", 00:34:28.109 "assigned_rate_limits": { 00:34:28.109 "rw_ios_per_sec": 0, 00:34:28.109 "rw_mbytes_per_sec": 0, 00:34:28.109 "r_mbytes_per_sec": 0, 00:34:28.109 "w_mbytes_per_sec": 0 00:34:28.109 }, 00:34:28.109 "claimed": false, 00:34:28.109 "zoned": false, 00:34:28.109 "supported_io_types": { 00:34:28.109 "read": true, 00:34:28.109 "write": true, 00:34:28.109 "unmap": true, 00:34:28.109 "flush": false, 00:34:28.109 "reset": true, 00:34:28.109 "nvme_admin": false, 00:34:28.109 "nvme_io": false, 00:34:28.109 "nvme_io_md": false, 00:34:28.109 "write_zeroes": true, 00:34:28.109 "zcopy": false, 00:34:28.109 "get_zone_info": false, 00:34:28.109 "zone_management": false, 00:34:28.109 "zone_append": false, 00:34:28.109 "compare": false, 00:34:28.109 "compare_and_write": false, 00:34:28.109 "abort": false, 00:34:28.109 "seek_hole": true, 00:34:28.109 "seek_data": true, 00:34:28.109 "copy": false, 00:34:28.109 "nvme_iov_md": false 00:34:28.109 }, 00:34:28.109 "driver_specific": { 00:34:28.109 "lvol": { 00:34:28.109 "lvol_store_uuid": "a294b70f-ed1f-42a1-bfd4-bfd475825848", 00:34:28.109 "base_bdev": "aio_bdev", 00:34:28.109 "thin_provision": false, 00:34:28.109 "num_allocated_clusters": 38, 00:34:28.109 "snapshot": false, 00:34:28.109 "clone": false, 00:34:28.109 "esnap_clone": false 00:34:28.109 } 00:34:28.109 } 00:34:28.109 } 00:34:28.109 ] 00:34:28.109 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # return 0 00:34:28.109 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a294b70f-ed1f-42a1-bfd4-bfd475825848 00:34:28.109 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:34:28.371 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:34:28.371 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a294b70f-ed1f-42a1-bfd4-bfd475825848 00:34:28.371 09:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:34:28.633 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:34:28.633 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:28.633 [2024-10-07 09:55:28.270716] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:28.892 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a294b70f-ed1f-42a1-bfd4-bfd475825848 00:34:28.892 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # local es=0 00:34:28.892 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a294b70f-ed1f-42a1-bfd4-bfd475825848 00:34:28.892 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:28.892 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:34:28.892 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:28.892 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:34:28.892 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@647 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:28.893 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:34:28.893 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@647 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:28.893 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@647 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:28.893 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@656 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a294b70f-ed1f-42a1-bfd4-bfd475825848 00:34:28.893 request: 00:34:28.893 { 00:34:28.893 "uuid": "a294b70f-ed1f-42a1-bfd4-bfd475825848", 00:34:28.893 "method": "bdev_lvol_get_lvstores", 00:34:28.893 "req_id": 1 00:34:28.893 } 00:34:28.893 Got JSON-RPC error response 00:34:28.893 response: 00:34:28.893 { 00:34:28.893 "code": -19, 00:34:28.893 "message": "No such device" 
00:34:28.893 } 00:34:28.893 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@656 -- # es=1 00:34:28.893 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:34:28.893 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:34:28.893 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:34:28.893 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:29.153 aio_bdev 00:34:29.153 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d94f5b9a-c2fa-4aa2-a84a-3e5afb83b2eb 00:34:29.153 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_name=d94f5b9a-c2fa-4aa2-a84a-3e5afb83b2eb 00:34:29.153 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:34:29.153 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local i 00:34:29.153 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:34:29.153 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:34:29.153 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:29.415 09:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d94f5b9a-c2fa-4aa2-a84a-3e5afb83b2eb -t 2000 00:34:29.415 [ 00:34:29.415 { 00:34:29.415 "name": "d94f5b9a-c2fa-4aa2-a84a-3e5afb83b2eb", 00:34:29.415 "aliases": [ 00:34:29.415 "lvs/lvol" 00:34:29.415 ], 00:34:29.415 "product_name": "Logical Volume", 00:34:29.415 "block_size": 4096, 00:34:29.415 "num_blocks": 38912, 00:34:29.415 "uuid": "d94f5b9a-c2fa-4aa2-a84a-3e5afb83b2eb", 00:34:29.415 "assigned_rate_limits": { 00:34:29.415 "rw_ios_per_sec": 0, 00:34:29.415 "rw_mbytes_per_sec": 0, 00:34:29.415 "r_mbytes_per_sec": 0, 00:34:29.415 "w_mbytes_per_sec": 0 00:34:29.415 }, 00:34:29.415 "claimed": false, 00:34:29.415 "zoned": false, 00:34:29.415 "supported_io_types": { 00:34:29.415 "read": true, 00:34:29.415 "write": true, 00:34:29.415 "unmap": true, 00:34:29.415 "flush": false, 00:34:29.415 "reset": true, 00:34:29.415 "nvme_admin": false, 00:34:29.415 "nvme_io": false, 00:34:29.415 "nvme_io_md": false, 00:34:29.415 "write_zeroes": true, 00:34:29.415 "zcopy": false, 00:34:29.415 "get_zone_info": false, 00:34:29.415 "zone_management": false, 00:34:29.415 "zone_append": false, 00:34:29.415 "compare": false, 00:34:29.415 "compare_and_write": false, 00:34:29.415 "abort": false, 00:34:29.415 "seek_hole": true, 00:34:29.415 "seek_data": true, 00:34:29.415 "copy": false, 
00:34:29.415 "nvme_iov_md": false 00:34:29.415 }, 00:34:29.415 "driver_specific": { 00:34:29.415 "lvol": { 00:34:29.415 "lvol_store_uuid": "a294b70f-ed1f-42a1-bfd4-bfd475825848", 00:34:29.415 "base_bdev": "aio_bdev", 00:34:29.415 "thin_provision": false, 00:34:29.415 "num_allocated_clusters": 38, 00:34:29.415 "snapshot": false, 00:34:29.415 "clone": false, 00:34:29.415 "esnap_clone": false 00:34:29.415 } 00:34:29.415 } 00:34:29.415 } 00:34:29.415 ] 00:34:29.415 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # return 0 00:34:29.415 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a294b70f-ed1f-42a1-bfd4-bfd475825848 00:34:29.415 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:29.676 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:29.676 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a294b70f-ed1f-42a1-bfd4-bfd475825848 00:34:29.676 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:29.936 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:29.936 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d94f5b9a-c2fa-4aa2-a84a-3e5afb83b2eb 00:34:29.936 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a294b70f-ed1f-42a1-bfd4-bfd475825848 00:34:30.197 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:30.458 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:30.458 00:34:30.458 real 0m17.533s 00:34:30.458 user 0m35.430s 00:34:30.458 sys 0m3.113s 00:34:30.458 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # xtrace_disable 00:34:30.458 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:30.458 ************************************ 00:34:30.458 END TEST lvs_grow_dirty 00:34:30.458 ************************************ 00:34:30.458 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:34:30.458 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # type=--id 00:34:30.458 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # id=0 00:34:30.458 
09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # '[' --id = --pid ']' 00:34:30.458 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:34:30.458 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # shm_files=nvmf_trace.0 00:34:30.458 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # [[ -z nvmf_trace.0 ]] 00:34:30.458 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # for n in $shm_files 00:34:30.458 09:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:34:30.458 nvmf_trace.0 00:34:30.458 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@826 -- # return 0 00:34:30.458 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:34:30.458 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:30.458 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:34:30.458 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:30.458 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:34:30.458 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:30.458 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:30.458 rmmod nvme_tcp 00:34:30.458 rmmod nvme_fabrics 00:34:30.458 rmmod nvme_keyring 00:34:30.458 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:30.458 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:34:30.458 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:34:30.458 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 3600788 ']' 00:34:30.458 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 3600788 00:34:30.458 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' -z 3600788 ']' 00:34:30.458 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # kill -0 3600788 00:34:30.458 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # uname 00:34:30.458 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:34:30.458 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3600788 00:34:30.720 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:34:30.720 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 
00:34:30.720 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3600788' 00:34:30.720 killing process with pid 3600788 00:34:30.720 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # kill 3600788 00:34:30.720 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@977 -- # wait 3600788 00:34:30.720 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:30.720 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:30.720 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:30.720 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:34:30.720 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:34:30.720 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:30.720 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:34:30.720 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:30.720 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:30.720 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.720 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:30.720 09:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:33.271 00:34:33.271 real 0m45.089s 00:34:33.271 user 0m54.211s 00:34:33.271 sys 0m10.749s 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # xtrace_disable 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:33.271 ************************************ 00:34:33.271 END TEST nvmf_lvs_grow 00:34:33.271 ************************************ 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1110 -- # xtrace_disable 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:33.271 ************************************ 00:34:33.271 START TEST nvmf_bdev_io_wait 00:34:33.271 ************************************ 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 
--interrupt-mode 00:34:33.271 * Looking for test storage... 00:34:33.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1626 -- # lcov --version 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:33.271 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:34:33.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.271 --rc genhtml_branch_coverage=1 00:34:33.271 --rc genhtml_function_coverage=1 00:34:33.272 --rc genhtml_legend=1 00:34:33.272 --rc geninfo_all_blocks=1 00:34:33.272 --rc geninfo_unexecuted_blocks=1 00:34:33.272 00:34:33.272 ' 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:34:33.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.272 --rc genhtml_branch_coverage=1 00:34:33.272 --rc genhtml_function_coverage=1 00:34:33.272 --rc genhtml_legend=1 00:34:33.272 --rc geninfo_all_blocks=1 00:34:33.272 --rc geninfo_unexecuted_blocks=1 00:34:33.272 00:34:33.272 ' 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:34:33.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.272 --rc genhtml_branch_coverage=1 00:34:33.272 --rc genhtml_function_coverage=1 00:34:33.272 --rc genhtml_legend=1 00:34:33.272 --rc geninfo_all_blocks=1 00:34:33.272 --rc geninfo_unexecuted_blocks=1 00:34:33.272 00:34:33.272 ' 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:34:33.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.272 --rc genhtml_branch_coverage=1 00:34:33.272 --rc genhtml_function_coverage=1 00:34:33.272 --rc genhtml_legend=1 00:34:33.272 --rc geninfo_all_blocks=1 00:34:33.272 --rc 
geninfo_unexecuted_blocks=1 00:34:33.272 00:34:33.272 ' 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:34:33.272 09:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:41.422 09:55:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:41.422 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:41.422 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:41.422 09:55:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:41.422 Found net devices under 0000:31:00.0: cvl_0_0 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:41.422 Found net devices under 0000:31:00.1: cvl_0_1 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:41.422 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2
00:34:41.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:41.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms
00:34:41.423
00:34:41.423 --- 10.0.0.2 ping statistics ---
00:34:41.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:41.423 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:41.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:41.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms
00:34:41.423
00:34:41.423 --- 10.0.0.1 ping statistics ---
00:34:41.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:41.423 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@727 -- # xtrace_disable
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=3605916
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 3605916
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # '[' -z 3605916 ']'
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local max_retries=100
00:34:41.423 09:55:40
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:41.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@843 -- # xtrace_disable 00:34:41.423 09:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:41.423 [2024-10-07 09:55:40.607042] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:41.423 [2024-10-07 09:55:40.608184] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:34:41.423 [2024-10-07 09:55:40.608237] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:41.423 [2024-10-07 09:55:40.697470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:41.423 [2024-10-07 09:55:40.792090] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:41.423 [2024-10-07 09:55:40.792158] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:41.423 [2024-10-07 09:55:40.792167] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:41.423 [2024-10-07 09:55:40.792174] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:41.423 [2024-10-07 09:55:40.792180] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:41.423 [2024-10-07 09:55:40.794258] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:41.423 [2024-10-07 09:55:40.794420] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:34:41.423 [2024-10-07 09:55:40.794580] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:41.423 [2024-10-07 09:55:40.794580] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:34:41.423 [2024-10-07 09:55:40.794942] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
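The nvmf_tcp_init steps traced above wire a two-namespace topology out of the e810 port pair, so NVMe/TCP traffic actually crosses the wire: the target port is moved into its own network namespace while the initiator port stays in the root namespace. A condensed sketch of just those commands, reusing the interface and namespace names from the trace and assuming the cvl_0_* netdevs already exist:

  # Target port gets its own namespace; initiator port stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # The ACCEPT rule is tagged with a comment so teardown can strip it selectively.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2   # root ns -> target ns reachability, as in the pings above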
00:34:41.996 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:34:41.996 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@867 -- # return 0 00:34:41.996 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:41.996 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@733 -- # xtrace_disable 00:34:41.996 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:41.996 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:41.996 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:34:41.996 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@564 -- # xtrace_disable 00:34:41.996 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:41.996 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:34:41.996 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:34:41.996 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@564 -- # xtrace_disable 00:34:41.996 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:41.996 [2024-10-07 09:55:41.530935] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:41.996 [2024-10-07 09:55:41.532038] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:41.997 [2024-10-07 09:55:41.532182] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:41.997 [2024-10-07 09:55:41.532317] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
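Because nvmf_tgt was launched with --wait-for-rpc, the bdev options have to be applied over RPC before framework initialization, and the deliberately tiny pool configured here is what makes bdevperf's 128-deep queues exhaust it and exercise the IO-wait path this test targets. A minimal sketch of that ordering, assuming a plain rpc.py wrapper rather than the test's rpc_cmd helper:

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  # Must land before framework_start_init, or the target rejects it.
  # -p/-c: bdev IO pool size 5 and per-thread cache 1 (flag meanings assumed from rpc.py).
  rpc bdev_set_options -p 5 -c 1
  rpc framework_start_init                      # poll groups come up (interrupt mode here)
  rpc nvmf_create_transport -t tcp -o -u 8192   # as in the next trace step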
00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@564 -- # xtrace_disable 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:41.997 [2024-10-07 09:55:41.543138] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@564 -- # xtrace_disable 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:41.997 Malloc0 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@564 -- # xtrace_disable 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@564 -- # xtrace_disable 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@564 -- # xtrace_disable 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:41.997 [2024-10-07 09:55:41.627811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3606001 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3606004 00:34:41.997 09:55:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:41.997 { 00:34:41.997 "params": { 00:34:41.997 "name": "Nvme$subsystem", 00:34:41.997 "trtype": "$TEST_TRANSPORT", 00:34:41.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.997 "adrfam": "ipv4", 00:34:41.997 "trsvcid": "$NVMF_PORT", 00:34:41.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.997 "hdgst": ${hdgst:-false}, 00:34:41.997 "ddgst": ${ddgst:-false} 00:34:41.997 }, 00:34:41.997 "method": "bdev_nvme_attach_controller" 00:34:41.997 } 00:34:41.997 EOF 00:34:41.997 )") 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3606007 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:41.997 { 00:34:41.997 "params": { 00:34:41.997 "name": "Nvme$subsystem", 00:34:41.997 "trtype": "$TEST_TRANSPORT", 00:34:41.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.997 "adrfam": "ipv4", 00:34:41.997 "trsvcid": "$NVMF_PORT", 00:34:41.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.997 "hdgst": ${hdgst:-false}, 00:34:41.997 "ddgst": ${ddgst:-false} 00:34:41.997 }, 00:34:41.997 "method": "bdev_nvme_attach_controller" 00:34:41.997 } 00:34:41.997 EOF 00:34:41.997 )") 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3606011 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:41.997 { 00:34:41.997 "params": { 00:34:41.997 "name": "Nvme$subsystem", 00:34:41.997 "trtype": "$TEST_TRANSPORT", 00:34:41.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.997 "adrfam": "ipv4", 00:34:41.997 "trsvcid": "$NVMF_PORT", 00:34:41.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.997 "hdgst": ${hdgst:-false}, 00:34:41.997 "ddgst": ${ddgst:-false} 00:34:41.997 }, 00:34:41.997 "method": "bdev_nvme_attach_controller" 00:34:41.997 } 00:34:41.997 EOF 00:34:41.997 )") 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:41.997 { 00:34:41.997 "params": { 00:34:41.997 "name": "Nvme$subsystem", 00:34:41.997 "trtype": "$TEST_TRANSPORT", 00:34:41.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.997 "adrfam": "ipv4", 00:34:41.997 "trsvcid": "$NVMF_PORT", 00:34:41.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.997 "hdgst": ${hdgst:-false}, 00:34:41.997 "ddgst": ${ddgst:-false} 00:34:41.997 }, 00:34:41.997 "method": "bdev_nvme_attach_controller" 00:34:41.997 } 00:34:41.997 EOF 00:34:41.997 )") 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3606001 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
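The heredoc-plus-jq dance above is gen_nvmf_target_json assembling, for each bdevperf instance, a one-controller bdev config that is handed over a process-substitution FD (the --json /dev/fd/63 in the command lines). A self-contained sketch of the equivalent, with the subsystems/config wrapper shape assumed from SPDK's JSON-config format:

  # Emit one bdev_nvme_attach_controller stanza, pretty-printed as the jq . step does.
  gen_json() {
      jq -n '{
        subsystems: [{
          subsystem: "bdev",
          config: [{
            method: "bdev_nvme_attach_controller",
            params: {
              name: "Nvme1", trtype: "tcp", traddr: "10.0.0.2",
              adrfam: "ipv4", trsvcid: "4420",
              subnqn: "nqn.2016-06.io.spdk:cnode1",
              hostnqn: "nqn.2016-06.io.spdk:host1",
              hdgst: false, ddgst: false
            }
          }]
        }]
      }'
  }
  # <(...) is why bdevperf sees --json /dev/fd/63 in the trace.
  ./build/examples/bdevperf --json <(gen_json) -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256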
00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:41.997 "params": { 00:34:41.997 "name": "Nvme1", 00:34:41.997 "trtype": "tcp", 00:34:41.997 "traddr": "10.0.0.2", 00:34:41.997 "adrfam": "ipv4", 00:34:41.997 "trsvcid": "4420", 00:34:41.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:41.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:41.997 "hdgst": false, 00:34:41.997 "ddgst": false 00:34:41.997 }, 00:34:41.997 "method": "bdev_nvme_attach_controller" 00:34:41.997 }' 00:34:41.997 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:34:41.998 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:34:41.998 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:41.998 "params": { 00:34:41.998 "name": "Nvme1", 00:34:41.998 "trtype": "tcp", 00:34:41.998 "traddr": "10.0.0.2", 00:34:41.998 "adrfam": "ipv4", 00:34:41.998 "trsvcid": "4420", 00:34:41.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:41.998 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:41.998 "hdgst": false, 00:34:41.998 "ddgst": false 00:34:41.998 }, 00:34:41.998 "method": "bdev_nvme_attach_controller" 00:34:41.998 }' 00:34:41.998 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:34:41.998 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:41.998 "params": { 00:34:41.998 "name": "Nvme1", 00:34:41.998 "trtype": "tcp", 00:34:41.998 "traddr": "10.0.0.2", 00:34:41.998 "adrfam": "ipv4", 00:34:41.998 "trsvcid": "4420", 00:34:41.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:41.998 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:41.998 "hdgst": false, 00:34:41.998 "ddgst": false 00:34:41.998 }, 00:34:41.998 "method": "bdev_nvme_attach_controller" 00:34:41.998 }' 00:34:41.998 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:34:41.998 09:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:41.998 "params": { 00:34:41.998 "name": "Nvme1", 00:34:41.998 "trtype": "tcp", 00:34:41.998 "traddr": "10.0.0.2", 00:34:41.998 "adrfam": "ipv4", 00:34:41.998 "trsvcid": "4420", 00:34:41.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:41.998 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:41.998 "hdgst": false, 00:34:41.998 "ddgst": false 00:34:41.998 }, 00:34:41.998 "method": "bdev_nvme_attach_controller" 00:34:41.998 }' 00:34:42.259 [2024-10-07 09:55:41.686046] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:34:42.259 [2024-10-07 09:55:41.686119] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:42.259 [2024-10-07 09:55:41.687382] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
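Four bdevperf processes, one per workload, then run concurrently against the same Malloc0-backed namespace; each gets its own core mask and shm id (-i 1..4), which is why the DPDK EAL records around this point carry distinct spdk1..spdk4 file-prefixes. A sketch of that launch-and-reap pattern, reusing the hypothetical gen_json above:

  workloads=(write read flush unmap)
  masks=(0x10 0x20 0x40 0x80)
  pids=()
  for i in "${!workloads[@]}"; do
      # A distinct core mask and shm id per process keeps their DPDK state separate.
      ./build/examples/bdevperf --json <(gen_json) -m "${masks[$i]}" -i "$((i + 1))" \
          -q 128 -o 4096 -w "${workloads[$i]}" -t 1 -s 256 &
      pids+=($!)
  done
  wait "${pids[@]}"   # collapses the trace's per-PID waits (WRITE_PID/READ_PID/...)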
00:34:42.259 [2024-10-07 09:55:41.687451] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:34:42.259 [2024-10-07 09:55:41.688693] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:34:42.259 [2024-10-07 09:55:41.688760] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:34:42.259 [2024-10-07 09:55:41.688860] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:34:42.259 [2024-10-07 09:55:41.688923] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:34:42.259 [2024-10-07 09:55:41.903267] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.521 [2024-10-07 09:55:41.977724] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:34:42.521 [2024-10-07 09:55:41.999659] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.521 [2024-10-07 09:55:42.068173] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.521 [2024-10-07 09:55:42.074094] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:34:42.521 [2024-10-07 09:55:42.136521] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:34:42.521 [2024-10-07 09:55:42.138813] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.825 [2024-10-07 09:55:42.209388] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:34:42.825 Running I/O for 1 seconds... 00:34:42.825 Running I/O for 1 seconds... 00:34:43.127 Running I/O for 1 seconds... 00:34:43.413 Running I/O for 1 seconds... 
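A quick sanity check on the result tables that follow: the MiB/s column is just IOPS scaled by the 4 KiB IO size, MiB/s = IOPS * 4096 / 2^20. For the flush job, for example:

  awk 'BEGIN { printf "%.2f\n", 183002.02 * 4096 / 1048576 }'   # prints 714.85

The same holds for the write job: 11619.65 * 4096 / 2^20 ≈ 45.39.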
00:34:44.033 183384.00 IOPS, 716.34 MiB/s
00:34:44.033 Latency(us)
00:34:44.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:44.033 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:34:44.033 Nvme1n1 : 1.00 183002.02 714.85 0.00 0.00 695.58 334.51 2075.31
00:34:44.033 ===================================================================================================================
00:34:44.033 Total : 183002.02 714.85 0.00 0.00 695.58 334.51 2075.31
00:34:44.033 11571.00 IOPS, 45.20 MiB/s
00:34:44.033 Latency(us)
00:34:44.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:44.033 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:34:44.033 Nvme1n1 : 1.01 11619.65 45.39 0.00 0.00 10971.93 2334.72 13707.95
00:34:44.033 ===================================================================================================================
00:34:44.033 Total : 11619.65 45.39 0.00 0.00 10971.93 2334.72 13707.95
00:34:44.033 9793.00 IOPS, 38.25 MiB/s
00:34:44.033 Latency(us)
00:34:44.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:44.033 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:34:44.033 Nvme1n1 : 1.01 9836.64 38.42 0.00 0.00 12954.40 5434.03 17585.49
00:34:44.033 ===================================================================================================================
00:34:44.033 Total : 9836.64 38.42 0.00 0.00 12954.40 5434.03 17585.49
00:34:44.033 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3606004
09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3606007
09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3606011
00:34:44.316 11670.00 IOPS, 45.59 MiB/s
00:34:44.316 Latency(us)
00:34:44.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:44.316 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:34:44.316 Nvme1n1 : 1.01 11749.13 45.90 0.00 0.00 10857.13 2949.12 19442.35
00:34:44.316 ===================================================================================================================
00:34:44.316 Total : 11749.13 45.90 0.00 0.00 10857.13 2949.12 19442.35
00:34:44.577 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:44.577 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@564 -- # xtrace_disable
00:34:44.577 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:34:44.577 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:34:44.577 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:34:44.577 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:34:44.577 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup
00:34:44.577 09:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:34:44.577 09:55:44
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:44.577 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:34:44.578 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:44.578 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:44.578 rmmod nvme_tcp 00:34:44.578 rmmod nvme_fabrics 00:34:44.578 rmmod nvme_keyring 00:34:44.578 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:44.578 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:34:44.578 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:34:44.578 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 3605916 ']' 00:34:44.578 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 3605916 00:34:44.578 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' -z 3605916 ']' 00:34:44.578 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # kill -0 3605916 00:34:44.578 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # uname 00:34:44.578 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:34:44.578 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3605916 00:34:44.578 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:34:44.578 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:34:44.578 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3605916' 00:34:44.578 killing process with pid 3605916 00:34:44.578 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # kill 3605916 00:34:44.578 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@977 -- # wait 3605916 00:34:44.840 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:44.840 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:44.840 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:44.840 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:34:44.840 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:34:44.840 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:44.840 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:34:44.840 09:55:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:44.840 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:44.840 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:44.840 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:44.840 09:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:46.755 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:47.016 00:34:47.016 real 0m13.912s 00:34:47.016 user 0m18.067s 00:34:47.016 sys 0m8.361s 00:34:47.017 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # xtrace_disable 00:34:47.017 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:47.017 ************************************ 00:34:47.017 END TEST nvmf_bdev_io_wait 00:34:47.017 ************************************ 00:34:47.017 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:47.017 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:34:47.017 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1110 -- # xtrace_disable 00:34:47.017 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:47.017 ************************************ 00:34:47.017 START TEST nvmf_queue_depth 00:34:47.017 ************************************ 00:34:47.017 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:47.017 * Looking for test storage... 
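The nvmftestfini sequence traced above unwinds the setup in reverse: kill the target, unload the host-side modules (the cascaded rmmod lines), strip only the comment-tagged firewall rule, and drop the namespace. A simplified sketch of those steps; treating _remove_spdk_ns as equivalent to deleting the namespace is an assumption:

  kill "$nvmfpid" 2> /dev/null && wait "$nvmfpid" 2> /dev/null   # killprocess, simplified
  modprobe -v -r nvme-tcp        # cascades into the rmmod nvme_tcp/fabrics/keyring lines
  # Remove only rules tagged SPDK_NVMF, leaving the rest of the firewall intact.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk 2> /dev/null   # assumed _remove_spdk_ns equivalent
  ip -4 addr flush cvl_0_1 2> /dev/null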
00:34:47.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:47.017 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:34:47.017 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1626 -- # lcov --version 00:34:47.017 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:34:47.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.279 --rc genhtml_branch_coverage=1 00:34:47.279 --rc genhtml_function_coverage=1 00:34:47.279 --rc genhtml_legend=1 00:34:47.279 --rc geninfo_all_blocks=1 00:34:47.279 --rc geninfo_unexecuted_blocks=1 00:34:47.279 00:34:47.279 ' 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:34:47.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.279 --rc genhtml_branch_coverage=1 00:34:47.279 --rc genhtml_function_coverage=1 00:34:47.279 --rc genhtml_legend=1 00:34:47.279 --rc geninfo_all_blocks=1 00:34:47.279 --rc geninfo_unexecuted_blocks=1 00:34:47.279 00:34:47.279 ' 00:34:47.279 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:34:47.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.279 --rc genhtml_branch_coverage=1 00:34:47.280 --rc genhtml_function_coverage=1 00:34:47.280 --rc genhtml_legend=1 00:34:47.280 --rc geninfo_all_blocks=1 00:34:47.280 --rc geninfo_unexecuted_blocks=1 00:34:47.280 00:34:47.280 ' 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:34:47.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.280 --rc genhtml_branch_coverage=1 00:34:47.280 --rc genhtml_function_coverage=1 00:34:47.280 --rc genhtml_legend=1 00:34:47.280 --rc geninfo_all_blocks=1 00:34:47.280 --rc 
geninfo_unexecuted_blocks=1 00:34:47.280 00:34:47.280 ' 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three tool dirs re-prepended repeatedly; duplicates elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same entries, go dir re-prepended; duplicates elided] 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same entries, protoc dir re-prepended; duplicates elided] 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[exported PATH echoed back; duplicates elided]:/var/lib/snapd/snap/bin 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:34:47.280 09:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:55.424 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:55.425 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:55.425 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:55.425 Found net devices under 0000:31:00.0: cvl_0_0 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:55.425 Found net devices under 0000:31:00.1: cvl_0_1 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:55.425 09:55:54 
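The discovery loop traced above reduces to a sysfs lookup; a minimal sketch using this rig's PCI addresses (other machines will enumerate the e810 ports differently):

# map each supported NIC (here: Intel e810, vendor:device 0x8086:0x159b) to its kernel net device
for pci in 0000:31:00.0 0000:31:00.1; do
    ls "/sys/bus/pci/devices/$pci/net/"    # prints cvl_0_0 and cvl_0_1 on this rig
done
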
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:55.425 09:55:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:55.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:55.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:34:55.425 00:34:55.425 --- 10.0.0.2 ping statistics --- 00:34:55.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:55.425 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:55.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:55.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:34:55.425 00:34:55.425 --- 10.0.0.1 ping statistics --- 00:34:55.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:55.425 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@727 -- # xtrace_disable 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=3610770 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 3610770 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@834 -- # '[' -z 3610770 ']' 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local max_retries=100 
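Condensed, the nvmf_tcp_init sequence traced above builds a two-namespace topology; the commands are the ones logged, and the cvl_* interface names are specific to this rig:

ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                                  # root namespace -> target (verified above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> initiator
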
00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:55.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:55.425 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@843 -- # xtrace_disable 00:34:55.426 09:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:55.426 [2024-10-07 09:55:54.553328] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:55.426 [2024-10-07 09:55:54.554476] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:34:55.426 [2024-10-07 09:55:54.554524] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:55.426 [2024-10-07 09:55:54.650322] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.426 [2024-10-07 09:55:54.742735] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:55.426 [2024-10-07 09:55:54.742797] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:55.426 [2024-10-07 09:55:54.742806] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:55.426 [2024-10-07 09:55:54.742812] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:55.426 [2024-10-07 09:55:54.742819] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:55.426 [2024-10-07 09:55:54.743593] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:55.426 [2024-10-07 09:55:54.818891] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:55.426 [2024-10-07 09:55:54.819179] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
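Stripped of the harness, the nvmfappstart/waitforlisten pair above amounts to roughly the following sketch; the readiness poll via rpc_get_methods is an assumed convenience, not necessarily the harness's exact mechanism:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# launch the target inside the target-side namespace with the NVMF_APP args built earlier
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# poll the UNIX-domain RPC socket until the target answers (assumed readiness check)
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
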
00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@867 -- # return 0 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@733 -- # xtrace_disable 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@564 -- # xtrace_disable 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:55.998 [2024-10-07 09:55:55.436463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@564 -- # xtrace_disable 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:55.998 Malloc0 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@564 -- # xtrace_disable 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@564 -- # xtrace_disable 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@564 -- # xtrace_disable 
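Those five rpc_cmd calls above are the entire target configuration for this test; annotated (rpc_cmd is the harness wrapper around scripts/rpc.py):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192        # TCP transport; -u 8192 = 8 KiB in-capsule data
rpc_cmd bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as a namespace
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
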
00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:55.998 [2024-10-07 09:55:55.516682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3611074 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3611074 /var/tmp/bdevperf.sock 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@834 -- # '[' -z 3611074 ']' 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local max_retries=100 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:55.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@843 -- # xtrace_disable 00:34:55.998 09:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:55.998 [2024-10-07 09:55:55.575778] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:34:55.998 [2024-10-07 09:55:55.575841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3611074 ] 00:34:55.998 [2024-10-07 09:55:55.657609] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.259 [2024-10-07 09:55:55.753237] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.830 09:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:34:56.830 09:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@867 -- # return 0 00:34:56.830 09:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:56.830 09:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@564 -- # xtrace_disable 00:34:56.830 09:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:57.091 NVMe0n1 00:34:57.091 09:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:34:57.091 09:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:57.091 Running I/O for 10 seconds... 00:35:07.406 8559.00 IOPS, 33.43 MiB/s 8799.50 IOPS, 34.37 MiB/s 9826.67 IOPS, 38.39 MiB/s 10746.75 IOPS, 41.98 MiB/s 11277.20 IOPS, 44.05 MiB/s 11766.33 IOPS, 45.96 MiB/s 12014.29 IOPS, 46.93 MiB/s 12277.38 IOPS, 47.96 MiB/s 12426.89 IOPS, 48.54 MiB/s 12604.70 IOPS, 49.24 MiB/s 00:35:07.406 Latency(us) 00:35:07.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.406 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:35:07.406 Verification LBA range: start 0x0 length 0x4000 00:35:07.406 NVMe0n1 : 10.06 12629.80 49.34 0.00 0.00 80810.48 24466.77 74274.13 00:35:07.406 =================================================================================================================== 00:35:07.406 Total : 12629.80 49.34 0.00 0.00 80810.48 24466.77 74274.13 00:35:07.406 { 00:35:07.406 "results": [ 00:35:07.406 { 00:35:07.406 "job": "NVMe0n1", 00:35:07.406 "core_mask": "0x1", 00:35:07.406 "workload": "verify", 00:35:07.406 "status": "finished", 00:35:07.406 "verify_range": { 00:35:07.406 "start": 0, 00:35:07.406 "length": 16384 00:35:07.406 }, 00:35:07.406 "queue_depth": 1024, 00:35:07.406 "io_size": 4096, 00:35:07.406 "runtime": 10.060097, 00:35:07.406 "iops": 12629.798698760062, 00:35:07.406 "mibps": 49.33515116703149, 00:35:07.406 "io_failed": 0, 00:35:07.406 "io_timeout": 0, 00:35:07.406 "avg_latency_us": 80810.48154282461, 00:35:07.406 "min_latency_us": 24466.773333333334, 00:35:07.406 "max_latency_us": 74274.13333333333 00:35:07.406 } 00:35:07.406 ], 00:35:07.406 "core_count": 1 00:35:07.406 } 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3611074 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' -z 3611074 ']' 
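Two quick consistency checks on the summary above, using this run's numbers (bdevperf ran with -q 1024 and 4 KiB I/O); by Little's law the in-flight count should sit near the configured queue depth:

awk 'BEGIN {
    iops = 12629.80; io_size = 4096; lat_us = 80810.48
    printf "throughput: %.2f MiB/s\n", iops * io_size / 1048576   # reproduces the 49.34 MiB/s column
    printf "in-flight:  %.0f\n", iops * lat_us / 1e6              # ~1021, close to the -q 1024 depth
}'
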
00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # kill -0 3611074 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # uname 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3611074 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3611074' 00:35:07.406 killing process with pid 3611074 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # kill 3611074 00:35:07.406 Received shutdown signal, test time was about 10.000000 seconds 00:35:07.406 00:35:07.406 Latency(us) 00:35:07.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.406 =================================================================================================================== 00:35:07.406 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@977 -- # wait 3611074 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:07.406 rmmod nvme_tcp 00:35:07.406 rmmod nvme_fabrics 00:35:07.406 rmmod nvme_keyring 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 3610770 ']' 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 3610770 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' -z 3610770 ']' 00:35:07.406 09:56:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # kill -0 3610770 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # uname 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:35:07.406 09:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3610770 00:35:07.406 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:35:07.406 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:35:07.406 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3610770' 00:35:07.406 killing process with pid 3610770 00:35:07.406 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # kill 3610770 00:35:07.406 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@977 -- # wait 3610770 00:35:07.668 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:07.668 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:07.668 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:07.668 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:35:07.668 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:35:07.668 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:07.668 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:35:07.668 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:07.668 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:07.668 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.668 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:07.668 09:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:09.583 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:09.844 00:35:09.844 real 0m22.743s 00:35:09.844 user 0m24.735s 00:35:09.844 sys 0m7.653s 00:35:09.844 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # xtrace_disable 00:35:09.844 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:09.844 ************************************ 00:35:09.844 END TEST nvmf_queue_depth 00:35:09.844 ************************************ 00:35:09.844 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:09.844 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:35:09.844 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1110 -- # xtrace_disable 00:35:09.844 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:09.844 ************************************ 00:35:09.844 START TEST nvmf_target_multipath 00:35:09.844 ************************************ 00:35:09.844 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:09.844 * Looking for test storage... 00:35:09.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:09.844 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:35:09.844 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1626 -- # lcov --version 00:35:09.844 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:35:10.106 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:35:10.106 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:10.106 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:10.106 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:10.106 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:35:10.106 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:35:10.106 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:35:10.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.107 --rc genhtml_branch_coverage=1 00:35:10.107 --rc genhtml_function_coverage=1 00:35:10.107 --rc genhtml_legend=1 00:35:10.107 --rc geninfo_all_blocks=1 00:35:10.107 --rc geninfo_unexecuted_blocks=1 00:35:10.107 00:35:10.107 ' 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:35:10.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.107 --rc genhtml_branch_coverage=1 00:35:10.107 --rc genhtml_function_coverage=1 00:35:10.107 --rc genhtml_legend=1 00:35:10.107 --rc geninfo_all_blocks=1 00:35:10.107 --rc geninfo_unexecuted_blocks=1 00:35:10.107 00:35:10.107 ' 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:35:10.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.107 --rc genhtml_branch_coverage=1 00:35:10.107 --rc genhtml_function_coverage=1 00:35:10.107 --rc genhtml_legend=1 00:35:10.107 --rc geninfo_all_blocks=1 00:35:10.107 --rc geninfo_unexecuted_blocks=1 00:35:10.107 00:35:10.107 ' 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:35:10.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.107 --rc genhtml_branch_coverage=1 00:35:10.107 --rc genhtml_function_coverage=1 00:35:10.107 --rc 
genhtml_legend=1 00:35:10.107 --rc geninfo_all_blocks=1 00:35:10.107 --rc geninfo_unexecuted_blocks=1 00:35:10.107 00:35:10.107 ' 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:10.107 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:10.107 09:56:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three tool dirs re-prepended repeatedly; duplicates elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same entries, go dir re-prepended; duplicates elided] 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same entries, protoc dir re-prepended; duplicates elided] 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[exported PATH echoed back; duplicates elided]:/var/lib/snapd/snap/bin 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@436 -- # local -g is_hw=no 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:35:10.108 09:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:18.258 
09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:18.258 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:18.258 Found 0000:31:00.1 (0x8086 - 0x159b) 
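
The block above is gather_supported_nvmf_pci_devs sorting the host's NICs into buckets purely by PCI vendor:device ID: Intel (0x8086) E810 parts 0x1592/0x159b, x722 part 0x37d2, and a list of Mellanox (0x15b3) ConnectX parts. That is how 0000:31:00.0 and 0000:31:00.1 get matched as E810 (0x159b). A minimal sketch of the bucketing, reading sysfs directly in place of the pci_bus_cache helper the real nvmf/common.sh uses:

    # Hedged sketch: the sysfs walk stands in for pci_bus_cache, and the
    # Mellanox wildcard simplifies the explicit ConnectX ID list in the trace.
    intel=0x8086 mellanox=0x15b3
    declare -a e810 x722 mlx
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        case "$vendor:$device" in
            "$intel:0x1592" | "$intel:0x159b") e810+=("${dev##*/}") ;;  # E810 (ice)
            "$intel:0x37d2")                   x722+=("${dev##*/}") ;;  # x722
            "$mellanox:"*)                     mlx+=("${dev##*/}") ;;   # ConnectX
        esac
    done
    echo "E810 ports: ${e810[*]:-none}"

The [[ e810 == e810 ]] check that follows then narrows pci_devs to the e810 bucket before the script goes looking for net devices under each PCI address.
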
00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:18.258 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:18.259 Found net devices under 0000:31:00.0: cvl_0_0 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:35:18.259 Found net devices under 0000:31:00.1: cvl_0_1 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:18.259 09:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:18.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:18.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:35:18.259 00:35:18.259 --- 10.0.0.2 ping statistics --- 00:35:18.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:18.259 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:18.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:18.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:35:18.259 00:35:18.259 --- 10.0.0.1 ping statistics --- 00:35:18.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:18.259 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:35:18.259 only one NIC for nvmf test 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:18.259 09:56:17 
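
nvmf_tcp_init above fakes a two-host topology on a single machine: the target port cvl_0_0 moves into a private network namespace (cvl_0_0_ns_spdk) with 10.0.0.2/24, the initiator port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, an iptables rule opens TCP port 4420, and the two pings prove reachability in both directions before any NVMe traffic flows. Condensed from the trace:

    # Same sequence as the nvmf/common.sh trace above; interface and
    # namespace names are specific to this run.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the comment tag lets teardown find the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                               # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target ns -> initiator

Because NET_TYPE=phy, the two ports are physical E810 functions (presumably cabled back-to-back on this rig), so the pings cross a real link rather than a veth pair.
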
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:18.259 rmmod nvme_tcp 00:35:18.259 rmmod nvme_fabrics 00:35:18.259 rmmod nvme_keyring 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:18.259 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:18.260 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:18.260 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:18.260 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:35:18.260 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:18.260 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:35:18.260 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:18.260 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:18.260 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:18.260 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:18.260 09:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:20.179 09:56:19 
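
The teardown above is deliberately surgical about the firewall: instead of deleting rules by position, iptr re-applies the saved ruleset minus every line carrying the SPDK_NVMF comment tag, so only the rules this test installed disappear. A condensed sketch of nvmftestfini's cleanup path; the trace hides _remove_spdk_ns's body, so the ip netns delete line is an assumption about what it does:

    modprobe -v -r nvme-tcp                                # unload kernel initiator modules
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                               # release the initiator address
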
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:20.179 00:35:20.179 real 0m10.176s 00:35:20.179 user 0m2.276s 00:35:20.179 sys 0m5.854s 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # xtrace_disable 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:20.179 ************************************ 00:35:20.179 END TEST nvmf_target_multipath 00:35:20.179 ************************************ 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode 
-- common/autotest_common.sh@1110 -- # xtrace_disable 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:20.179 ************************************ 00:35:20.179 START TEST nvmf_zcopy 00:35:20.179 ************************************ 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:20.179 * Looking for test storage... 00:35:20.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1626 -- # lcov --version 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:35:20.179 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:35:20.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.180 --rc genhtml_branch_coverage=1 00:35:20.180 --rc genhtml_function_coverage=1 00:35:20.180 --rc genhtml_legend=1 00:35:20.180 --rc geninfo_all_blocks=1 00:35:20.180 --rc geninfo_unexecuted_blocks=1 00:35:20.180 00:35:20.180 ' 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:35:20.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.180 --rc genhtml_branch_coverage=1 00:35:20.180 --rc genhtml_function_coverage=1 00:35:20.180 --rc genhtml_legend=1 00:35:20.180 --rc geninfo_all_blocks=1 00:35:20.180 --rc geninfo_unexecuted_blocks=1 00:35:20.180 00:35:20.180 ' 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:35:20.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.180 --rc genhtml_branch_coverage=1 00:35:20.180 --rc genhtml_function_coverage=1 00:35:20.180 --rc genhtml_legend=1 00:35:20.180 --rc geninfo_all_blocks=1 00:35:20.180 --rc geninfo_unexecuted_blocks=1 00:35:20.180 00:35:20.180 ' 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:35:20.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.180 --rc genhtml_branch_coverage=1 00:35:20.180 --rc genhtml_function_coverage=1 00:35:20.180 --rc genhtml_legend=1 00:35:20.180 --rc geninfo_all_blocks=1 00:35:20.180 --rc geninfo_unexecuted_blocks=1 00:35:20.180 00:35:20.180 ' 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
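
The lcov probe above is scripts/common.sh's cmp_versions at work: both version strings are split on '.', '-' and ':' into arrays and compared component by component, so lt 1.15 2 is true (1 < 2 at the first component) and the newer --rc style LCOV options get exported. A minimal re-creation of that comparison, not a verbatim copy of scripts/common.sh:

    # Returns 0 (true) when $1 sorts strictly before $2, component-wise.
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov predates 2.x: use the legacy option spelling'
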
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:20.180 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:20.443 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:35:20.443 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:35:20.443 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:20.443 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:20.443 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:20.443 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:20.443 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:20.443 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:35:20.443 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:20.443 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:20.443 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:20.443 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:20.443 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:35:20.444 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:28.594 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:28.594 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:35:28.594 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:28.594 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:28.594 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:28.594 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:28.595 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:28.595 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:28.595 Found net devices under 0000:31:00.0: cvl_0_0 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:28.595 Found net devices under 0000:31:00.1: cvl_0_1 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:35:28.595 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:28.596 09:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:28.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:28.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:35:28.596 00:35:28.596 --- 10.0.0.2 ping statistics --- 00:35:28.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:28.596 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:28.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:28.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:35:28.596 00:35:28.596 --- 10.0.0.1 ping statistics --- 00:35:28.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:28.596 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@727 -- # xtrace_disable 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=3621702 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 3621702 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@834 -- # '[' -z 3621702 ']' 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local max_retries=100 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:28.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@843 -- # xtrace_disable 00:35:28.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:28.596 [2024-10-07 09:56:27.663686] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
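
nvmfappstart above launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2) and waitforlisten blocks until the RPC socket answers; core mask 0x2 pins the lone reactor to core 1, matching the 'Reactor started on core 1' notice that follows. A simplified stand-in for that start-and-wait step; the real waitforlisten in autotest_common.sh does more retry and error bookkeeping:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    for (( i = 0; i < 100; i++ )); do                     # poll for the RPC socket
        [[ -S /var/tmp/spdk.sock ]] && break
        sleep 0.1
    done
    "$spdk/scripts/rpc.py" rpc_get_methods &> /dev/null   # confirm the socket answers

The unix socket lives on the shared filesystem, so rpc.py can reach it from the root namespace even though the app runs inside cvl_0_0_ns_spdk.
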
00:35:28.596 [2024-10-07 09:56:27.664849] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization...
00:35:28.596 [2024-10-07 09:56:27.664902] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:28.596 [2024-10-07 09:56:27.755044] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:28.597 [2024-10-07 09:56:27.848148] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:28.597 [2024-10-07 09:56:27.848214] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:28.597 [2024-10-07 09:56:27.848223] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:28.597 [2024-10-07 09:56:27.848230] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:28.597 [2024-10-07 09:56:27.848237] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:28.597 [2024-10-07 09:56:27.849071] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:35:28.597 [2024-10-07 09:56:27.924711] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:35:28.597 [2024-10-07 09:56:27.924994] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:35:28.866 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@863 -- # (( i == 0 ))
00:35:28.866 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@867 -- # return 0
00:35:28.866 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:35:28.866 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@733 -- # xtrace_disable
00:35:28.866 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:28.866 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:28.866 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:35:28.866 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:35:28.866 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@564 -- # xtrace_disable
00:35:28.866 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:28.866 [2024-10-07 09:56:28.521983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
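rpc_cmd is a thin wrapper around scripts/rpc.py pointed at /var/tmp/spdk.sock, so the transport creation above is equivalent to the call below. --zcopy enables the zero-copy receive path this test exists to exercise; reading the other flags against rpc.py's option names, -c 0 appears to set the in-capsule data size to zero and -o to toggle the TCP C2H-success optimization, but treat that interpretation as hedged and check scripts/rpc.py in your tree.

    # Equivalent direct RPC call (flag interpretation hedged as noted above):
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock \
        nvmf_create_transport -t tcp -o -c 0 --zcopy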
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@564 -- # xtrace_disable
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@564 -- # xtrace_disable
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:29.169 [2024-10-07 09:56:28.550257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@564 -- # xtrace_disable
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@564 -- # xtrace_disable
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:29.169 malloc0
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@564 -- # xtrace_disable
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
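Spelled out as plain rpc.py calls, the provisioning sequence traced above is (paths as in this workspace; rpc_cmd adds nothing beyond the socket path):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host, max 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0     # 32 MiB RAM-backed bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # expose it as NSID 1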
"$NVMF_PORT", 00:35:29.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:29.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:29.169 "hdgst": ${hdgst:-false}, 00:35:29.169 "ddgst": ${ddgst:-false} 00:35:29.169 }, 00:35:29.169 "method": "bdev_nvme_attach_controller" 00:35:29.169 } 00:35:29.169 EOF 00:35:29.169 )") 00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:35:29.169 09:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:29.169 "params": { 00:35:29.169 "name": "Nvme1", 00:35:29.169 "trtype": "tcp", 00:35:29.169 "traddr": "10.0.0.2", 00:35:29.169 "adrfam": "ipv4", 00:35:29.169 "trsvcid": "4420", 00:35:29.169 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:29.169 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:29.169 "hdgst": false, 00:35:29.169 "ddgst": false 00:35:29.169 }, 00:35:29.169 "method": "bdev_nvme_attach_controller" 00:35:29.169 }' 00:35:29.169 [2024-10-07 09:56:28.674067] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:35:29.169 [2024-10-07 09:56:28.674131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3621897 ] 00:35:29.169 [2024-10-07 09:56:28.755407] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.472 [2024-10-07 09:56:28.851510] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.472 Running I/O for 10 seconds... 
00:35:39.780 6325.00 IOPS, 49.41 MiB/s
6372.50 IOPS, 49.79 MiB/s
6393.67 IOPS, 49.95 MiB/s
6406.25 IOPS, 50.05 MiB/s
6577.80 IOPS, 51.39 MiB/s
7069.50 IOPS, 55.23 MiB/s
7423.43 IOPS, 58.00 MiB/s
7687.75 IOPS, 60.06 MiB/s
7896.33 IOPS, 61.69 MiB/s
8059.10 IOPS, 62.96 MiB/s
00:35:39.780                                                             Latency(us)
00:35:39.780 Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s    Average       min       max
00:35:39.780 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:35:39.780 Verification LBA range: start 0x0 length 0x1000
00:35:39.780 Nvme1n1            :      10.01  8063.41    63.00     0.00     0.00   15827.41   2211.84  29054.29
00:35:39.780 ===================================================================================================================
00:35:39.780 Total              :             8063.41    63.00     0.00     0.00   15827.41   2211.84  29054.29
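The two result columns are self-consistent: at the 8 KiB I/O size, 8063.41 IOPS x 8192 B = 66.06 MB/s, which is 63.00 MiB/s after dividing by 1048576, exactly the MiB/s reported. The average latency tells the same story through Little's law: queue depth 128 / 15.827 ms of average latency = ~8088 IOPS, within about 0.3% of the measured 8063.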
00:35:39.780 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3623903
00:35:39.780 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:35:39.780 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:39.780 [2024-10-07 09:56:39.209485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:39.780 [2024-10-07 09:56:39.209514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:39.780 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:35:39.780 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:35:39.780 [gen_nvmf_target_json trace and resolved Nvme1 JSON identical to the verify run above; duplicate elided]
00:35:39.780 [2024-10-07 09:56:39.257442] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization...
00:35:39.780 [2024-10-07 09:56:39.257489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3623903 ]
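The error pairs that flood the rest of the run are self-inflicted: while bdevperf drives the 50/50 random workload, the script keeps re-issuing nvmf_subsystem_add_ns for a namespace ID that already exists, so nvmf_rpc_ns_paused pauses the subsystem, fails the add, and resumes it, exercising the pause/resume path while zero-copy I/O is in flight. The loop below is only an approximation of that step; the exact loop body lives in test/nvmf/target/zcopy.sh.

    # Hammer the pause/resume path while bdevperf (PID $perfpid) runs;
    # $rpc is the rpc.py helper from the earlier sketch.
    while kill -0 "$perfpid" 2> /dev/null; do
        # NSID 1 is taken, so every attempt fails with the pair of errors below.
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done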
00:35:39.780 [2024-10-07 09:56:39.217455 .. 09:56:41.537591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated once per add-namespace attempt; duplicates elided)
00:35:39.780 [2024-10-07 09:56:39.335427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:39.781 [2024-10-07 09:56:39.390545] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:35:40.042 Running I/O for 5 seconds...
00:35:41.088 18769.00 IOPS, 146.63 MiB/s
00:35:42.132 18834.50 IOPS, 147.14 MiB/s
00:35:42.132 [2024-10-07 09:56:41.543934]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.132 [2024-10-07 09:56:41.543948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.132 [2024-10-07 09:56:41.556276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.132 [2024-10-07 09:56:41.556290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.132 [2024-10-07 09:56:41.569386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.132 [2024-10-07 09:56:41.569401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.132 [2024-10-07 09:56:41.581865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.132 [2024-10-07 09:56:41.581878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.132 [2024-10-07 09:56:41.593836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.132 [2024-10-07 09:56:41.593849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.132 [2024-10-07 09:56:41.606669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.132 [2024-10-07 09:56:41.606684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.132 [2024-10-07 09:56:41.616281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.132 [2024-10-07 09:56:41.616295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.132 [2024-10-07 09:56:41.629475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.132 [2024-10-07 09:56:41.629490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.132 [2024-10-07 09:56:41.635754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.132 [2024-10-07 09:56:41.635768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.132 [2024-10-07 09:56:41.645043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.132 [2024-10-07 09:56:41.645057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.132 [2024-10-07 09:56:41.657837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.132 [2024-10-07 09:56:41.657851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.132 [2024-10-07 09:56:41.669660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.132 [2024-10-07 09:56:41.669674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.132 [2024-10-07 09:56:41.682013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.132 [2024-10-07 09:56:41.682027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.132 [2024-10-07 09:56:41.694376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.132 [2024-10-07 09:56:41.694391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.132 [2024-10-07 09:56:41.705611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.132 [2024-10-07 09:56:41.705630] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.132 [2024-10-07 09:56:41.718380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.133 [2024-10-07 09:56:41.718394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.133 [2024-10-07 09:56:41.727763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.133 [2024-10-07 09:56:41.727778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.133 [2024-10-07 09:56:41.736723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.133 [2024-10-07 09:56:41.736737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.133 [2024-10-07 09:56:41.749716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.133 [2024-10-07 09:56:41.749730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.133 [2024-10-07 09:56:41.762304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.133 [2024-10-07 09:56:41.762318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.133 [2024-10-07 09:56:41.772074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.133 [2024-10-07 09:56:41.772089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.133 [2024-10-07 09:56:41.785487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.133 [2024-10-07 09:56:41.785502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:41.797369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:41.797384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:41.810174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:41.810188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:41.821383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:41.821397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:41.833889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:41.833903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:41.846452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:41.846466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:41.857234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:41.857249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:41.870062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:41.870078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:41.882514] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:41.882529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:41.893509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:41.893524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:41.906391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:41.906405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:41.917743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:41.917757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:41.930401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:41.930415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:41.941651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:41.941664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:41.953994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:41.954009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:41.966419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:41.966433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:41.976095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:41.976110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:41.984294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:41.984308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:41.997661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:41.997676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:42.003929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:42.003943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:42.016676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:42.016691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:42.029814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:42.029829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:42.042578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:42.042593] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.394 [2024-10-07 09:56:42.052259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.394 [2024-10-07 09:56:42.052274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.065516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.065531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.077730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.077744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.090311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.090325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.099993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.100007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.108213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.108227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.121137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.121152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.133595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.133609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.145430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.145444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.158333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.158347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.169183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.169197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.181807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.181822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.193589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.193603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.199789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.199804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.209283] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.209298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.222269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.222284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.237214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.237230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.250015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.250029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.261416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.261431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.274424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.274439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.285091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.285106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.297907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.297921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.307397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.307412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.659 [2024-10-07 09:56:42.316876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.659 [2024-10-07 09:56:42.316891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.330011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.330026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.342425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.342440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.353413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.353428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.359742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.359757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.373778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.373792] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.385840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.385854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.397795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.397809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.410143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.410158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.421036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.421051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.433901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.433919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.446654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.446673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.456421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.456436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.469467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.469483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.481981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.481995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.494282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.494297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.505269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.505284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.518571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.518585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.533315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.533330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 18848.00 IOPS, 147.25 MiB/s [2024-10-07 09:56:42.546015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.546029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.557445] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.557459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.919 [2024-10-07 09:56:42.570145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.919 [2024-10-07 09:56:42.570159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.581326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.581340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.594325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.594339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.606433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.606447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.616414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.616429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.629728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.629741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.641766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.641781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.654189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.654204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.665342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.665361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.677724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.677737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.690109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.690122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.701189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.701204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.713871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.713885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.725080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.725094] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.738021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.738035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.750415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.750429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.760885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.760899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.774122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.774135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.785048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.785062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.797982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.797996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.809476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.809490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.822249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.822263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.180 [2024-10-07 09:56:42.834394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.180 [2024-10-07 09:56:42.834408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.440 [2024-10-07 09:56:42.845443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.440 [2024-10-07 09:56:42.845458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.440 [2024-10-07 09:56:42.858102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.440 [2024-10-07 09:56:42.858116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.440 [2024-10-07 09:56:42.870627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.440 [2024-10-07 09:56:42.870641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.440 [2024-10-07 09:56:42.880723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.440 [2024-10-07 09:56:42.880737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.440 [2024-10-07 09:56:42.893832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.440 [2024-10-07 09:56:42.893849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.440 [2024-10-07 09:56:42.906440] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.441 [2024-10-07 09:56:42.906454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.441 [2024-10-07 09:56:42.917257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.441 [2024-10-07 09:56:42.917272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.441 [2024-10-07 09:56:42.930390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.441 [2024-10-07 09:56:42.930404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.441 [2024-10-07 09:56:42.940167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.441 [2024-10-07 09:56:42.940181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.441 [2024-10-07 09:56:42.953163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.441 [2024-10-07 09:56:42.953177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.441 [2024-10-07 09:56:42.965668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.441 [2024-10-07 09:56:42.965682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.441 [2024-10-07 09:56:42.978526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.441 [2024-10-07 09:56:42.978540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.441 [2024-10-07 09:56:42.987991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.441 [2024-10-07 09:56:42.988005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.441 [2024-10-07 09:56:43.001384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.441 [2024-10-07 09:56:43.001399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.441 [2024-10-07 09:56:43.013676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.441 [2024-10-07 09:56:43.013690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.441 [2024-10-07 09:56:43.026319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.441 [2024-10-07 09:56:43.026333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.441 [2024-10-07 09:56:43.035879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.441 [2024-10-07 09:56:43.035893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.441 [2024-10-07 09:56:43.043709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.441 [2024-10-07 09:56:43.043723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.441 [2024-10-07 09:56:43.052073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.441 [2024-10-07 09:56:43.052088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.441 [2024-10-07 09:56:43.065009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.441 [2024-10-07 09:56:43.065024] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.441 [2024-10-07 09:56:43.078293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.441 [2024-10-07 09:56:43.078308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.441 [2024-10-07 09:56:43.089505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.441 [2024-10-07 09:56:43.089519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.441 [2024-10-07 09:56:43.102341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.441 [2024-10-07 09:56:43.102354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.701 [2024-10-07 09:56:43.111721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.701 [2024-10-07 09:56:43.111736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.701 [2024-10-07 09:56:43.120677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.701 [2024-10-07 09:56:43.120691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.701 [2024-10-07 09:56:43.133752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.701 [2024-10-07 09:56:43.133765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.701 [2024-10-07 09:56:43.145302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.701 [2024-10-07 09:56:43.145316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.701 [2024-10-07 09:56:43.158398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.701 [2024-10-07 09:56:43.158412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.702 [2024-10-07 09:56:43.168071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.702 [2024-10-07 09:56:43.168085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.702 [2024-10-07 09:56:43.180934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.702 [2024-10-07 09:56:43.180949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.702 [2024-10-07 09:56:43.193420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.702 [2024-10-07 09:56:43.193434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.702 [2024-10-07 09:56:43.206080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.702 [2024-10-07 09:56:43.206094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.702 [2024-10-07 09:56:43.217843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.702 [2024-10-07 09:56:43.217857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.702 [2024-10-07 09:56:43.230628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.702 [2024-10-07 09:56:43.230643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.702 [2024-10-07 09:56:43.240636] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.702 [2024-10-07 09:56:43.240650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.702 [2024-10-07 09:56:43.253604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.702 [2024-10-07 09:56:43.253624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.702 [2024-10-07 09:56:43.259972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.702 [2024-10-07 09:56:43.259987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.702 [2024-10-07 09:56:43.267245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.702 [2024-10-07 09:56:43.267259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.702 [2024-10-07 09:56:43.276753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.702 [2024-10-07 09:56:43.276767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.702 [2024-10-07 09:56:43.289787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.702 [2024-10-07 09:56:43.289801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.702 [2024-10-07 09:56:43.302258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.702 [2024-10-07 09:56:43.302272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.702 [2024-10-07 09:56:43.313179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.702 [2024-10-07 09:56:43.313193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.702 [2024-10-07 09:56:43.325863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.702 [2024-10-07 09:56:43.325877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.702 [2024-10-07 09:56:43.337604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.702 [2024-10-07 09:56:43.337622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.702 [2024-10-07 09:56:43.349703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.702 [2024-10-07 09:56:43.349718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.702 [2024-10-07 09:56:43.355962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.702 [2024-10-07 09:56:43.355976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.364003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.364018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.376976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.376990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.389374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.389388] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.402305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.402319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.412178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.412192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.425441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.425456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.437911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.437925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.450292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.450306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.465111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.465126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.477759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.477773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.488887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.488901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.501869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.501883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.514484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.514499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.525673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.525687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 18857.00 IOPS, 147.32 MiB/s [2024-10-07 09:56:43.538369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.538388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.549308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.549323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.562199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.562214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.571614] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.571633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.580640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.580655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.594019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.594033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.604892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.604907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.963 [2024-10-07 09:56:43.617967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.963 [2024-10-07 09:56:43.617981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.225 [2024-10-07 09:56:43.630371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.225 [2024-10-07 09:56:43.630386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.225 [2024-10-07 09:56:43.642544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.225 [2024-10-07 09:56:43.642559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.225 [2024-10-07 09:56:43.653552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.225 [2024-10-07 09:56:43.653566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.225 [2024-10-07 09:56:43.666595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.225 [2024-10-07 09:56:43.666610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.225 [2024-10-07 09:56:43.677014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.225 [2024-10-07 09:56:43.677028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.225 [2024-10-07 09:56:43.689984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.225 [2024-10-07 09:56:43.689998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.225 [2024-10-07 09:56:43.700175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.225 [2024-10-07 09:56:43.700190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.225 [2024-10-07 09:56:43.713368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.225 [2024-10-07 09:56:43.713382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.225 [2024-10-07 09:56:43.725594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.225 [2024-10-07 09:56:43.725609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.225 [2024-10-07 09:56:43.732018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.225 [2024-10-07 09:56:43.732033] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the subsystem.c:2128 "Requested NSID 1 already in use" / nvmf_rpc.c:1517 "Unable to add namespace" error pair repeats once per retry, from 09:56:43.740 through 09:56:44.539 ...]
00:35:45.011 18868.00 IOPS, 147.41 MiB/s
00:35:45.011 Latency(us)
00:35:45.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:45.011 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:35:45.011 Nvme1n1 : 5.01 18870.08 147.42 0.00 0.00 6777.58 2635.09 11304.96
00:35:45.011 ===================================================================================================================
00:35:45.011 Total : 18870.08 147.42 0.00 0.00 6777.58 2635.09 11304.96
[... the same error pair repeats a final few times, 09:56:44.545 through 09:56:44.661, while the retry loop drains ...]
00:35:45.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3623903) - No such process
00:35:45.011 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3623903
00:35:45.011 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
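The error storm above is the expected body of the zcopy loop: the script keeps re-issuing nvmf_subsystem_add_ns for NSID 1 while that NSID is still claimed, so every call fails at subsystem.c:2128 and surfaces through nvmf_rpc.c:1517. A minimal hand-run sketch of one such failing call (assumes a live nvmf_tgt with cnode1 and some bdev, here named malloc0, already attached as NSID 1):

    # from the spdk repo root; bdev name and NSID mirror the loop above
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # expected JSON-RPC error: "Requested NSID 1 already in use"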
00:35:45.011 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@564 -- # xtrace_disable 00:35:45.011 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:45.271 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:35:45.271 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:45.271 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@564 -- # xtrace_disable 00:35:45.272 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:45.272 delay0 00:35:45.272 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:35:45.272 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:35:45.272 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@564 -- # xtrace_disable 00:35:45.272 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:45.272 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:35:45.272 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:35:45.272 [2024-10-07 09:56:44.760537] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:53.419 Initializing NVMe Controllers 00:35:53.419 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:53.419 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:53.419 Initialization complete. Launching workers. 
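For reference, the abort example invocation traced above decodes as follows; the command is exactly as logged, while the flag readings in the comments follow the usual SPDK example-tool conventions and are interpretive:

    # one worker core, 5 s run, queue depth 64, 50/50 random read/write,
    # warning-level logging, TCP transport ID pointing at the target above
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'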
00:35:53.419 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 6129 00:35:53.419 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 6412, failed to submit 37 00:35:53.419 success 6216, unsuccessful 196, failed 0 00:35:53.419 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:35:53.419 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:35:53.419 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:53.419 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:35:53.419 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:53.419 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:35:53.419 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:53.419 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:53.419 rmmod nvme_tcp 00:35:53.419 rmmod nvme_fabrics 00:35:53.419 rmmod nvme_keyring 00:35:53.419 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:53.419 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 3621702 ']' 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 3621702 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' -z 3621702 ']' 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # kill -0 3621702 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # uname 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3621702 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3621702' 00:35:53.420 killing process with pid 3621702 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # kill 3621702 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@977 -- # wait 3621702 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:53.420 09:56:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:53.420 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:54.364 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:54.364 00:35:54.364 real 0m34.430s 00:35:54.364 user 0m42.947s 00:35:54.364 sys 0m12.581s 00:35:54.364 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # xtrace_disable 00:35:54.364 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:54.364 ************************************ 00:35:54.364 END TEST nvmf_zcopy 00:35:54.364 ************************************ 00:35:54.626 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:54.626 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:35:54.626 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1110 -- # xtrace_disable 00:35:54.626 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:54.626 ************************************ 00:35:54.626 START TEST nvmf_nmic 00:35:54.626 ************************************ 00:35:54.626 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:54.626 * Looking for test storage... 
00:35:54.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:54.626 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:35:54.626 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:35:54.626 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1626 -- # lcov --version 00:35:54.888 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:35:54.888 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:54.888 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:54.888 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:54.888 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:35:54.888 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:35:54.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.889 --rc genhtml_branch_coverage=1 00:35:54.889 --rc genhtml_function_coverage=1 00:35:54.889 --rc genhtml_legend=1 00:35:54.889 --rc geninfo_all_blocks=1 00:35:54.889 --rc geninfo_unexecuted_blocks=1 00:35:54.889 00:35:54.889 ' 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:35:54.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.889 --rc genhtml_branch_coverage=1 00:35:54.889 --rc genhtml_function_coverage=1 00:35:54.889 --rc genhtml_legend=1 00:35:54.889 --rc geninfo_all_blocks=1 00:35:54.889 --rc geninfo_unexecuted_blocks=1 00:35:54.889 00:35:54.889 ' 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:35:54.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.889 --rc genhtml_branch_coverage=1 00:35:54.889 --rc genhtml_function_coverage=1 00:35:54.889 --rc genhtml_legend=1 00:35:54.889 --rc geninfo_all_blocks=1 00:35:54.889 --rc geninfo_unexecuted_blocks=1 00:35:54.889 00:35:54.889 ' 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:35:54.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.889 --rc genhtml_branch_coverage=1 00:35:54.889 --rc genhtml_function_coverage=1 00:35:54.889 --rc genhtml_legend=1 00:35:54.889 --rc geninfo_all_blocks=1 00:35:54.889 --rc geninfo_unexecuted_blocks=1 00:35:54.889 00:35:54.889 ' 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same three toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:54.889 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... same directories repeated ...]:/var/lib/snapd/snap/bin
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... same directories repeated ...]:/var/lib/snapd/snap/bin
00:35:54.890 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:35:54.890 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... same directories repeated ...]:/var/lib/snapd/snap/bin
00:35:54.890 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']'
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
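Reassembled, the NVMF_APP array built by the @29/@31/@34 entries above amounts to the following; this is a sketch, with the base binary path inferred from the launch line that appears later in the trace, and NO_HUGE empty in this run:

    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id 0, tracepoint mask 0xFFFF
    NVMF_APP+=("${NO_HUGE[@]}")                   # empty here: hugepages stay enabled
    NVMF_APP+=(--interrupt-mode)                  # the '[' 1 -eq 1 ']' branch above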
00:35:54.890 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:54.890 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:54.890 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:35:54.890 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:03.036 09:57:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:03.036 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:03.036 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:03.036 09:57:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:03.036 Found net devices under 0000:31:00.0: cvl_0_0 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:03.036 Found net devices under 0000:31:00.1: cvl_0_1 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
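The discovery loop above keys everything off sysfs; the same facts it derived can be checked by hand for the two E810 ports it found (PCI addresses and ids taken from the log lines above):

    for pci in 0000:31:00.0 0000:31:00.1; do
        cat /sys/bus/pci/devices/$pci/device   # 0x159b, the E810 id matched above
        ls  /sys/bus/pci/devices/$pci/net      # cvl_0_0 / cvl_0_1, the net_devs found
    done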
00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:03.036 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:03.037 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:03.037 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:03.037 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:03.037 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:03.037 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:03.037 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:03.037 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:03.037 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:03.037 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:03.037 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:03.037 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:03.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:03.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:36:03.037 00:36:03.037 --- 10.0.0.2 ping statistics --- 00:36:03.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.037 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:03.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:03.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:36:03.037 00:36:03.037 --- 10.0.0.1 ping statistics --- 00:36:03.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.037 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=3630646 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 3630646 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@834 -- # '[' -z 3630646 ']' 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local max_retries=100 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:03.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@843 -- # xtrace_disable 00:36:03.037 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:03.037 [2024-10-07 09:57:02.213832] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
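Condensed, the nvmf_tcp_init sequence traced above builds this topology: the target port cvl_0_0 moves into a namespace, the initiator port cvl_0_1 stays in the default one, with one /24 between them (commands as logged; the iptables comment match is dropped for brevity):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # 0.648 ms above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # 0.336 ms above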
00:36:03.037 [2024-10-07 09:57:02.214994] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:36:03.037 [2024-10-07 09:57:02.215042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:03.037 [2024-10-07 09:57:02.305991] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:03.037 [2024-10-07 09:57:02.403591] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:03.037 [2024-10-07 09:57:02.403666] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:03.037 [2024-10-07 09:57:02.403676] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:03.037 [2024-10-07 09:57:02.403683] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:03.037 [2024-10-07 09:57:02.403690] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:03.037 [2024-10-07 09:57:02.405756] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:03.037 [2024-10-07 09:57:02.405916] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:36:03.037 [2024-10-07 09:57:02.406080] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:36:03.037 [2024-10-07 09:57:02.406081] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:03.037 [2024-10-07 09:57:02.494917] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:03.037 [2024-10-07 09:57:02.495918] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:03.037 [2024-10-07 09:57:02.496147] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:03.037 [2024-10-07 09:57:02.496461] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:03.037 [2024-10-07 09:57:02.496528] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
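Spelled out, the nvmfappstart call traced above launched the target as follows (reassembled from the @506-@508 entries; -m 0xF gives the four reactors, and --interrupt-mode produces the thread.c intr-mode notices):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!                 # 3630646 in this run
    waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock answers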
00:36:03.610 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:36:03.610 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@867 -- # return 0 00:36:03.610 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:03.610 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@733 -- # xtrace_disable 00:36:03.610 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:03.610 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@564 -- # xtrace_disable 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:03.611 [2024-10-07 09:57:03.079100] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@564 -- # xtrace_disable 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:03.611 Malloc0 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@564 -- # xtrace_disable 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@564 -- # xtrace_disable 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@564 -- # xtrace_disable 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:03.611 [2024-10-07 09:57:03.163242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:36:03.611 test case1: single bdev can't be used in multiple subsystems 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@564 -- # xtrace_disable 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@564 -- # xtrace_disable 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@564 -- # xtrace_disable 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:03.611 [2024-10-07 09:57:03.198711] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:36:03.611 [2024-10-07 09:57:03.198737] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:36:03.611 [2024-10-07 09:57:03.198745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:03.611 request: 00:36:03.611 { 00:36:03.611 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:36:03.611 "namespace": { 00:36:03.611 "bdev_name": "Malloc0", 00:36:03.611 "no_auto_visible": false 00:36:03.611 }, 00:36:03.611 "method": "nvmf_subsystem_add_ns", 00:36:03.611 "req_id": 1 00:36:03.611 } 00:36:03.611 Got JSON-RPC error response 00:36:03.611 response: 00:36:03.611 { 00:36:03.611 "code": -32602, 00:36:03.611 "message": "Invalid parameters" 00:36:03.611 } 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:36:03.611 Adding namespace failed - expected result. 
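The -32602 error is the pass condition for test case 1: when Malloc0 became cnode1's namespace it was claimed exclusive_write, so cnode2 cannot open the same bdev. A sketch reproducing the check with plain rpc.py calls (rpc_cmd in this trace is a thin wrapper over scripts/rpc.py; the cnode2 listener setup is omitted):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    if ! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo ' Adding namespace failed - expected result.'   # same message the test prints
    fi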
00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:36:03.611 test case2: host connect to nvmf target in multiple paths 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@564 -- # xtrace_disable 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:03.611 [2024-10-07 09:57:03.210873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:36:03.611 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:04.184 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:36:04.445 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:36:04.445 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local i=0 00:36:04.445 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local nvme_device_counter=1 nvme_devices=0 00:36:04.445 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # [[ -n '' ]] 00:36:04.445 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # sleep 2 00:36:06.994 09:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # (( i++ <= 15 )) 00:36:06.994 09:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # lsblk -l -o NAME,SERIAL 00:36:06.994 09:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # grep -c SPDKISFASTANDAWESOME 00:36:06.994 09:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # nvme_devices=1 00:36:06.994 09:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # (( nvme_devices == nvme_device_counter )) 00:36:06.994 09:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # return 0 00:36:06.994 09:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:06.994 [global] 00:36:06.994 thread=1 00:36:06.994 invalidate=1 00:36:06.994 rw=write 00:36:06.994 time_based=1 00:36:06.994 runtime=1 00:36:06.994 ioengine=libaio 00:36:06.994 direct=1 00:36:06.994 bs=4096 00:36:06.994 iodepth=1 00:36:06.994 norandommap=0 00:36:06.994 numjobs=1 00:36:06.994 00:36:06.994 verify_dump=1 00:36:06.994 verify_backlog=512 00:36:06.994 
verify_state_save=0 00:36:06.994 do_verify=1 00:36:06.994 verify=crc32c-intel 00:36:06.994 [job0] 00:36:06.994 filename=/dev/nvme0n1 00:36:06.994 Could not set queue depth (nvme0n1) 00:36:06.994 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:06.994 fio-3.35 00:36:06.994 Starting 1 thread 00:36:08.381 00:36:08.381 job0: (groupid=0, jobs=1): err= 0: pid=3631621: Mon Oct 7 09:57:07 2024 00:36:08.381 read: IOPS=16, BW=66.4KiB/s (68.0kB/s)(68.0KiB/1024msec) 00:36:08.381 slat (nsec): min=26119, max=27750, avg=26841.59, stdev=454.23 00:36:08.381 clat (usec): min=1079, max=42032, avg=39516.83, stdev=9906.79 00:36:08.381 lat (usec): min=1105, max=42058, avg=39543.67, stdev=9906.90 00:36:08.381 clat percentiles (usec): 00:36:08.381 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[41157], 20.00th=[41681], 00:36:08.381 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:36:08.381 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:08.381 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:08.381 | 99.99th=[42206] 00:36:08.381 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:36:08.381 slat (usec): min=9, max=28404, avg=85.41, stdev=1254.01 00:36:08.381 clat (usec): min=214, max=925, avg=594.21, stdev=96.37 00:36:08.381 lat (usec): min=226, max=28994, avg=679.62, stdev=1257.97 00:36:08.381 clat percentiles (usec): 00:36:08.381 | 1.00th=[ 392], 5.00th=[ 412], 10.00th=[ 465], 20.00th=[ 506], 00:36:08.381 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 627], 00:36:08.381 | 70.00th=[ 668], 80.00th=[ 685], 90.00th=[ 709], 95.00th=[ 725], 00:36:08.381 | 99.00th=[ 783], 99.50th=[ 783], 99.90th=[ 930], 99.95th=[ 930], 00:36:08.381 | 99.99th=[ 930] 00:36:08.381 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:36:08.381 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:08.381 lat (usec) : 250=0.19%, 500=17.77%, 750=76.18%, 1000=2.65% 00:36:08.381 lat (msec) : 2=0.19%, 50=3.02% 00:36:08.381 cpu : usr=0.68%, sys=1.56%, ctx=534, majf=0, minf=1 00:36:08.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:08.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.381 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:08.381 00:36:08.381 Run status group 0 (all jobs): 00:36:08.381 READ: bw=66.4KiB/s (68.0kB/s), 66.4KiB/s-66.4KiB/s (68.0kB/s-68.0kB/s), io=68.0KiB (69.6kB), run=1024-1024msec 00:36:08.381 WRITE: bw=2000KiB/s (2048kB/s), 2000KiB/s-2000KiB/s (2048kB/s-2048kB/s), io=2048KiB (2097kB), run=1024-1024msec 00:36:08.381 00:36:08.381 Disk stats (read/write): 00:36:08.381 nvme0n1: ios=40/512, merge=0/0, ticks=1531/294, in_queue=1825, util=98.70% 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:08.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # local i=0 00:36:08.381 09:57:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -o NAME,SERIAL 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME,SERIAL 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1230 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1234 -- # return 0 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:08.381 rmmod nvme_tcp 00:36:08.381 rmmod nvme_fabrics 00:36:08.381 rmmod nvme_keyring 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 3630646 ']' 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 3630646 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' -z 3630646 ']' 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # kill -0 3630646 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # uname 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:36:08.381 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3630646 00:36:08.381 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:36:08.381 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:36:08.381 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3630646' 00:36:08.381 killing process with pid 3630646 00:36:08.381 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # kill 3630646 00:36:08.381 09:57:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@977 -- # wait 3630646 00:36:08.643 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:08.643 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:08.643 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:08.643 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:36:08.643 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:36:08.643 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:08.643 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:36:08.643 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:08.643 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:08.643 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:08.643 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:08.643 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.191 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:11.191 00:36:11.191 real 0m16.139s 00:36:11.191 user 0m37.116s 00:36:11.191 sys 0m7.728s 00:36:11.191 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # xtrace_disable 00:36:11.191 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:11.191 ************************************ 00:36:11.191 END TEST nvmf_nmic 00:36:11.191 ************************************ 00:36:11.191 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:11.191 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:36:11.191 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1110 -- # xtrace_disable 00:36:11.191 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:11.191 ************************************ 00:36:11.191 START TEST nvmf_fio_target 00:36:11.191 ************************************ 00:36:11.191 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:11.191 * Looking for test storage... 
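nvmftestfini, traced just above, unwinds the nmic setup in reverse: host disconnect, kernel module removal, killing the target, iptables restore, and an address flush on the initiator interface. A rough manual equivalent under the same names as this run (nvmfpid was 3630646 here):

    sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drops both paths (4420 and 4421) at once
    sudo modprobe -r nvme-tcp nvme-fabrics               # mirrors the rmmod nvme_tcp/nvme_fabrics lines
    kill "$nvmfpid"                                      # stop nvmf_tgt; killprocess also waits on the pid
    sudo ip -4 addr flush cvl_0_1                        # matches the final addr flush in the trace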
00:36:11.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1626 -- # lcov --version 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:36:11.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.192 --rc genhtml_branch_coverage=1 00:36:11.192 --rc genhtml_function_coverage=1 00:36:11.192 --rc genhtml_legend=1 00:36:11.192 --rc geninfo_all_blocks=1 00:36:11.192 --rc geninfo_unexecuted_blocks=1 00:36:11.192 00:36:11.192 ' 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:36:11.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.192 --rc genhtml_branch_coverage=1 00:36:11.192 --rc genhtml_function_coverage=1 00:36:11.192 --rc genhtml_legend=1 00:36:11.192 --rc geninfo_all_blocks=1 00:36:11.192 --rc geninfo_unexecuted_blocks=1 00:36:11.192 00:36:11.192 ' 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:36:11.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.192 --rc genhtml_branch_coverage=1 00:36:11.192 --rc genhtml_function_coverage=1 00:36:11.192 --rc genhtml_legend=1 00:36:11.192 --rc geninfo_all_blocks=1 00:36:11.192 --rc geninfo_unexecuted_blocks=1 00:36:11.192 00:36:11.192 ' 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:36:11.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.192 --rc genhtml_branch_coverage=1 00:36:11.192 --rc genhtml_function_coverage=1 00:36:11.192 --rc genhtml_legend=1 00:36:11.192 --rc geninfo_all_blocks=1 00:36:11.192 --rc geninfo_unexecuted_blocks=1 00:36:11.192 
00:36:11.192 ' 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:36:11.192 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:36:11.193 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:19.339 
09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:19.339 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:19.339 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:19.339 09:57:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:19.339 Found net devices under 0000:31:00.0: cvl_0_0 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:19.339 Found net devices under 0000:31:00.1: cvl_0_1 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 
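Device discovery above matched both ports of an Intel E810 (vendor 0x8086, device 0x159b) and recorded their renamed net devices, so is_hw=yes and the physical TCP path is taken. A quick manual equivalent using the PCI addresses this run reported:

    lspci -d 8086:159b                          # should list 0000:31:00.0 and 0000:31:00.1
    ls /sys/bus/pci/devices/0000:31:00.0/net/   # -> cvl_0_0, used as the target-side port
    ls /sys/bus/pci/devices/0000:31:00.1/net/   # -> cvl_0_1, used as the initiator-side port

nvmf_tcp_init, next in the trace, splits the two ports across namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2, while cvl_0_1 stays in the default namespace as 10.0.0.1.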
00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:19.339 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:19.340 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:19.340 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:19.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:19.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:36:19.340 00:36:19.340 --- 10.0.0.2 ping statistics --- 00:36:19.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.340 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:19.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:19.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:36:19.340 00:36:19.340 --- 10.0.0.1 ping statistics --- 00:36:19.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.340 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=3636538 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 3636538 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@834 -- # '[' -z 3636538 ']' 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local max_retries=100 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:36:19.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@843 -- # xtrace_disable 00:36:19.340 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:19.340 [2024-10-07 09:57:18.389416] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:19.340 [2024-10-07 09:57:18.390555] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:36:19.340 [2024-10-07 09:57:18.390608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:19.340 [2024-10-07 09:57:18.477675] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:19.340 [2024-10-07 09:57:18.574534] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:19.340 [2024-10-07 09:57:18.574592] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:19.340 [2024-10-07 09:57:18.574601] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:19.340 [2024-10-07 09:57:18.574608] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:19.340 [2024-10-07 09:57:18.574615] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:19.340 [2024-10-07 09:57:18.576665] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:19.340 [2024-10-07 09:57:18.576913] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:36:19.340 [2024-10-07 09:57:18.577060] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:19.340 [2024-10-07 09:57:18.577060] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:36:19.340 [2024-10-07 09:57:18.672026] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:19.340 [2024-10-07 09:57:18.672924] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:19.340 [2024-10-07 09:57:18.673273] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:19.340 [2024-10-07 09:57:18.673635] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:19.340 [2024-10-07 09:57:18.673703] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
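With the fio.sh target up, the RPC sequence that follows assembles a layered namespace set: two standalone malloc bdevs, a raid0 over two more, and a concat over three, all exported through cnode1. Condensed into plain rpc.py calls (a sketch; sizes, names, and flags as in the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done   # auto-named Malloc0..Malloc6
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for ns in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$ns"
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The host then connects once and waits for all four namespaces to surface (waitforserial SPDKISFASTANDAWESOME 4, below).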
00:36:19.601 09:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:36:19.601 09:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@867 -- # return 0 00:36:19.601 09:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:19.601 09:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@733 -- # xtrace_disable 00:36:19.601 09:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:19.601 09:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:19.601 09:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:19.863 [2024-10-07 09:57:19.414007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:19.863 09:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:20.124 09:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:36:20.124 09:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:20.386 09:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:36:20.386 09:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:20.647 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:36:20.647 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:20.647 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:36:20.647 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:36:20.908 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:21.170 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:36:21.170 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:21.432 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:36:21.432 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:21.693 09:57:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:36:21.693 09:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:36:21.693 09:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:21.953 09:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:21.953 09:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:22.214 09:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:22.214 09:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:22.476 09:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:22.476 [2024-10-07 09:57:22.045968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:22.476 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:36:22.738 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:36:22.999 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:23.260 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:36:23.260 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local i=0 00:36:23.260 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local nvme_device_counter=1 nvme_devices=0 00:36:23.260 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # [[ -n 4 ]] 00:36:23.260 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # nvme_device_counter=4 00:36:23.260 09:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # sleep 2 00:36:25.805 09:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # (( i++ <= 15 )) 00:36:25.805 09:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # lsblk -l -o 
NAME,SERIAL 00:36:25.805 09:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # grep -c SPDKISFASTANDAWESOME 00:36:25.805 09:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # nvme_devices=4 00:36:25.805 09:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # (( nvme_devices == nvme_device_counter )) 00:36:25.805 09:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # return 0 00:36:25.805 09:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:25.805 [global] 00:36:25.805 thread=1 00:36:25.805 invalidate=1 00:36:25.805 rw=write 00:36:25.805 time_based=1 00:36:25.805 runtime=1 00:36:25.805 ioengine=libaio 00:36:25.805 direct=1 00:36:25.805 bs=4096 00:36:25.805 iodepth=1 00:36:25.805 norandommap=0 00:36:25.805 numjobs=1 00:36:25.805 00:36:25.805 verify_dump=1 00:36:25.805 verify_backlog=512 00:36:25.805 verify_state_save=0 00:36:25.805 do_verify=1 00:36:25.805 verify=crc32c-intel 00:36:25.805 [job0] 00:36:25.805 filename=/dev/nvme0n1 00:36:25.805 [job1] 00:36:25.805 filename=/dev/nvme0n2 00:36:25.805 [job2] 00:36:25.805 filename=/dev/nvme0n3 00:36:25.805 [job3] 00:36:25.805 filename=/dev/nvme0n4 00:36:25.805 Could not set queue depth (nvme0n1) 00:36:25.805 Could not set queue depth (nvme0n2) 00:36:25.805 Could not set queue depth (nvme0n3) 00:36:25.805 Could not set queue depth (nvme0n4) 00:36:25.805 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:25.805 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:25.805 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:25.805 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:25.805 fio-3.35 00:36:25.805 Starting 4 threads 00:36:27.215 00:36:27.216 job0: (groupid=0, jobs=1): err= 0: pid=3638049: Mon Oct 7 09:57:26 2024 00:36:27.216 read: IOPS=19, BW=78.7KiB/s (80.6kB/s)(80.0KiB/1016msec) 00:36:27.216 slat (nsec): min=26410, max=26997, avg=26645.90, stdev=165.26 00:36:27.216 clat (usec): min=999, max=41283, avg=38980.06, stdev=8940.33 00:36:27.216 lat (usec): min=1026, max=41310, avg=39006.70, stdev=8940.34 00:36:27.216 clat percentiles (usec): 00:36:27.216 | 1.00th=[ 996], 5.00th=[ 996], 10.00th=[40633], 20.00th=[40633], 00:36:27.216 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:27.216 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:27.216 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:36:27.216 | 99.99th=[41157] 00:36:27.216 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:36:27.216 slat (nsec): min=10015, max=54398, avg=31171.90, stdev=9355.12 00:36:27.216 clat (usec): min=116, max=837, avg=417.82, stdev=124.34 00:36:27.216 lat (usec): min=128, max=872, avg=448.99, stdev=126.72 00:36:27.216 clat percentiles (usec): 00:36:27.216 | 1.00th=[ 127], 5.00th=[ 223], 10.00th=[ 277], 20.00th=[ 322], 00:36:27.216 | 30.00th=[ 343], 40.00th=[ 363], 50.00th=[ 416], 60.00th=[ 441], 00:36:27.216 | 70.00th=[ 478], 80.00th=[ 545], 90.00th=[ 578], 95.00th=[ 627], 00:36:27.216 | 
99.00th=[ 717], 99.50th=[ 758], 99.90th=[ 840], 99.95th=[ 840], 00:36:27.216 | 99.99th=[ 840] 00:36:27.216 bw ( KiB/s): min= 4096, max= 4096, per=37.36%, avg=4096.00, stdev= 0.00, samples=1 00:36:27.216 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:27.216 lat (usec) : 250=7.52%, 500=62.97%, 750=25.19%, 1000=0.75% 00:36:27.216 lat (msec) : 50=3.57% 00:36:27.216 cpu : usr=1.08%, sys=1.18%, ctx=534, majf=0, minf=1 00:36:27.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.216 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:27.216 job1: (groupid=0, jobs=1): err= 0: pid=3638050: Mon Oct 7 09:57:26 2024 00:36:27.216 read: IOPS=619, BW=2478KiB/s (2537kB/s)(2480KiB/1001msec) 00:36:27.216 slat (nsec): min=7035, max=44539, avg=24637.79, stdev=6884.42 00:36:27.216 clat (usec): min=321, max=41771, avg=968.13, stdev=2308.95 00:36:27.216 lat (usec): min=328, max=41798, avg=992.76, stdev=2309.12 00:36:27.216 clat percentiles (usec): 00:36:27.216 | 1.00th=[ 506], 5.00th=[ 611], 10.00th=[ 652], 20.00th=[ 717], 00:36:27.216 | 30.00th=[ 758], 40.00th=[ 799], 50.00th=[ 832], 60.00th=[ 881], 00:36:27.216 | 70.00th=[ 914], 80.00th=[ 955], 90.00th=[ 996], 95.00th=[ 1057], 00:36:27.216 | 99.00th=[ 1352], 99.50th=[ 1385], 99.90th=[41681], 99.95th=[41681], 00:36:27.216 | 99.99th=[41681] 00:36:27.216 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:36:27.216 slat (usec): min=5, max=2007, avg=22.79, stdev=63.34 00:36:27.216 clat (usec): min=118, max=1731, avg=340.71, stdev=121.13 00:36:27.216 lat (usec): min=128, max=2510, avg=363.50, stdev=143.77 00:36:27.216 clat percentiles (usec): 00:36:27.216 | 1.00th=[ 131], 5.00th=[ 153], 10.00th=[ 212], 20.00th=[ 253], 00:36:27.216 | 30.00th=[ 273], 40.00th=[ 297], 50.00th=[ 326], 60.00th=[ 351], 00:36:27.216 | 70.00th=[ 392], 80.00th=[ 437], 90.00th=[ 490], 95.00th=[ 553], 00:36:27.216 | 99.00th=[ 627], 99.50th=[ 693], 99.90th=[ 881], 99.95th=[ 1729], 00:36:27.216 | 99.99th=[ 1729] 00:36:27.216 bw ( KiB/s): min= 4096, max= 4096, per=37.36%, avg=4096.00, stdev= 0.00, samples=1 00:36:27.216 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:27.216 lat (usec) : 250=11.31%, 500=45.80%, 750=15.51%, 1000=24.03% 00:36:27.216 lat (msec) : 2=3.22%, 50=0.12% 00:36:27.216 cpu : usr=2.50%, sys=3.20%, ctx=1647, majf=0, minf=1 00:36:27.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.216 issued rwts: total=620,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:27.216 job2: (groupid=0, jobs=1): err= 0: pid=3638052: Mon Oct 7 09:57:26 2024 00:36:27.216 read: IOPS=25, BW=104KiB/s (106kB/s)(108KiB/1039msec) 00:36:27.216 slat (nsec): min=10100, max=26803, avg=25697.52, stdev=3124.48 00:36:27.216 clat (usec): min=767, max=42241, avg=28109.87, stdev=19644.34 00:36:27.216 lat (usec): min=794, max=42267, avg=28135.56, stdev=19643.92 00:36:27.216 clat percentiles (usec): 00:36:27.216 | 1.00th=[ 766], 5.00th=[ 824], 10.00th=[ 832], 20.00th=[ 873], 00:36:27.216 | 
30.00th=[ 922], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:36:27.216 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:27.216 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:27.216 | 99.99th=[42206] 00:36:27.216 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:36:27.216 slat (usec): min=9, max=5836, avg=41.69, stdev=256.86 00:36:27.216 clat (usec): min=116, max=872, avg=492.52, stdev=141.61 00:36:27.216 lat (usec): min=158, max=6493, avg=534.21, stdev=301.43 00:36:27.216 clat percentiles (usec): 00:36:27.216 | 1.00th=[ 174], 5.00th=[ 251], 10.00th=[ 293], 20.00th=[ 375], 00:36:27.216 | 30.00th=[ 420], 40.00th=[ 453], 50.00th=[ 498], 60.00th=[ 529], 00:36:27.216 | 70.00th=[ 578], 80.00th=[ 619], 90.00th=[ 676], 95.00th=[ 709], 00:36:27.216 | 99.00th=[ 775], 99.50th=[ 824], 99.90th=[ 873], 99.95th=[ 873], 00:36:27.216 | 99.99th=[ 873] 00:36:27.216 bw ( KiB/s): min= 4096, max= 4096, per=37.36%, avg=4096.00, stdev= 0.00, samples=1 00:36:27.216 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:27.216 lat (usec) : 250=4.64%, 500=43.23%, 750=44.34%, 1000=4.45% 00:36:27.216 lat (msec) : 50=3.34% 00:36:27.216 cpu : usr=0.67%, sys=1.45%, ctx=542, majf=0, minf=1 00:36:27.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.216 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:27.216 job3: (groupid=0, jobs=1): err= 0: pid=3638057: Mon Oct 7 09:57:26 2024 00:36:27.216 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:27.216 slat (nsec): min=6968, max=62506, avg=27383.34, stdev=3828.92 00:36:27.216 clat (usec): min=619, max=1357, avg=947.87, stdev=105.58 00:36:27.216 lat (usec): min=647, max=1384, avg=975.25, stdev=105.89 00:36:27.216 clat percentiles (usec): 00:36:27.216 | 1.00th=[ 668], 5.00th=[ 750], 10.00th=[ 816], 20.00th=[ 873], 00:36:27.216 | 30.00th=[ 914], 40.00th=[ 938], 50.00th=[ 955], 60.00th=[ 971], 00:36:27.216 | 70.00th=[ 988], 80.00th=[ 1020], 90.00th=[ 1074], 95.00th=[ 1123], 00:36:27.216 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1352], 99.95th=[ 1352], 00:36:27.216 | 99.99th=[ 1352] 00:36:27.216 write: IOPS=799, BW=3197KiB/s (3274kB/s)(3200KiB/1001msec); 0 zone resets 00:36:27.216 slat (nsec): min=9452, max=68387, avg=32565.35, stdev=8773.16 00:36:27.216 clat (usec): min=236, max=952, avg=580.66, stdev=117.95 00:36:27.216 lat (usec): min=248, max=987, avg=613.23, stdev=120.48 00:36:27.216 clat percentiles (usec): 00:36:27.216 | 1.00th=[ 306], 5.00th=[ 383], 10.00th=[ 429], 20.00th=[ 478], 00:36:27.216 | 30.00th=[ 523], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 603], 00:36:27.216 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 734], 95.00th=[ 783], 00:36:27.216 | 99.00th=[ 857], 99.50th=[ 889], 99.90th=[ 955], 99.95th=[ 955], 00:36:27.216 | 99.99th=[ 955] 00:36:27.216 bw ( KiB/s): min= 4096, max= 4096, per=37.36%, avg=4096.00, stdev= 0.00, samples=1 00:36:27.216 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:27.216 lat (usec) : 250=0.08%, 500=15.17%, 750=43.06%, 1000=31.71% 00:36:27.216 lat (msec) : 2=9.98% 00:36:27.216 cpu : usr=3.80%, sys=4.30%, ctx=1312, majf=0, minf=2 00:36:27.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:36:27.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.216 issued rwts: total=512,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:27.216 00:36:27.216 Run status group 0 (all jobs): 00:36:27.216 READ: bw=4539KiB/s (4648kB/s), 78.7KiB/s-2478KiB/s (80.6kB/s-2537kB/s), io=4716KiB (4829kB), run=1001-1039msec 00:36:27.216 WRITE: bw=10.7MiB/s (11.2MB/s), 1971KiB/s-4092KiB/s (2018kB/s-4190kB/s), io=11.1MiB (11.7MB), run=1001-1039msec 00:36:27.216 00:36:27.216 Disk stats (read/write): 00:36:27.216 nvme0n1: ios=70/512, merge=0/0, ticks=1005/207, in_queue=1212, util=86.77% 00:36:27.216 nvme0n2: ios=594/1024, merge=0/0, ticks=586/334, in_queue=920, util=87.74% 00:36:27.216 nvme0n3: ios=68/512, merge=0/0, ticks=821/242, in_queue=1063, util=94.93% 00:36:27.216 nvme0n4: ios=569/526, merge=0/0, ticks=583/238, in_queue=821, util=97.32% 00:36:27.216 09:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:36:27.216 [global] 00:36:27.216 thread=1 00:36:27.216 invalidate=1 00:36:27.216 rw=randwrite 00:36:27.216 time_based=1 00:36:27.216 runtime=1 00:36:27.216 ioengine=libaio 00:36:27.216 direct=1 00:36:27.216 bs=4096 00:36:27.216 iodepth=1 00:36:27.216 norandommap=0 00:36:27.216 numjobs=1 00:36:27.216 00:36:27.216 verify_dump=1 00:36:27.216 verify_backlog=512 00:36:27.216 verify_state_save=0 00:36:27.216 do_verify=1 00:36:27.216 verify=crc32c-intel 00:36:27.216 [job0] 00:36:27.216 filename=/dev/nvme0n1 00:36:27.216 [job1] 00:36:27.216 filename=/dev/nvme0n2 00:36:27.216 [job2] 00:36:27.216 filename=/dev/nvme0n3 00:36:27.216 [job3] 00:36:27.216 filename=/dev/nvme0n4 00:36:27.216 Could not set queue depth (nvme0n1) 00:36:27.216 Could not set queue depth (nvme0n2) 00:36:27.216 Could not set queue depth (nvme0n3) 00:36:27.216 Could not set queue depth (nvme0n4) 00:36:27.485 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:27.485 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:27.485 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:27.485 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:27.485 fio-3.35 00:36:27.485 Starting 4 threads 00:36:28.894 00:36:28.894 job0: (groupid=0, jobs=1): err= 0: pid=3638574: Mon Oct 7 09:57:28 2024 00:36:28.894 read: IOPS=22, BW=90.8KiB/s (93.0kB/s)(92.0KiB/1013msec) 00:36:28.894 slat (nsec): min=27354, max=28868, avg=27783.48, stdev=370.93 00:36:28.894 clat (usec): min=593, max=42109, avg=31141.16, stdev=18344.76 00:36:28.894 lat (usec): min=621, max=42137, avg=31168.95, stdev=18344.71 00:36:28.894 clat percentiles (usec): 00:36:28.894 | 1.00th=[ 594], 5.00th=[ 898], 10.00th=[ 914], 20.00th=[ 1139], 00:36:28.894 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:36:28.894 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:28.894 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:28.894 | 99.99th=[42206] 00:36:28.894 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:36:28.894 slat 
(nsec): min=8879, max=53942, avg=30803.72, stdev=10697.21 00:36:28.894 clat (usec): min=111, max=1140, avg=539.98, stdev=177.01 00:36:28.894 lat (usec): min=120, max=1176, avg=570.78, stdev=180.86 00:36:28.894 clat percentiles (usec): 00:36:28.894 | 1.00th=[ 143], 5.00th=[ 249], 10.00th=[ 289], 20.00th=[ 379], 00:36:28.894 | 30.00th=[ 441], 40.00th=[ 502], 50.00th=[ 553], 60.00th=[ 594], 00:36:28.894 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 758], 95.00th=[ 816], 00:36:28.894 | 99.00th=[ 971], 99.50th=[ 988], 99.90th=[ 1139], 99.95th=[ 1139], 00:36:28.894 | 99.99th=[ 1139] 00:36:28.894 bw ( KiB/s): min= 4096, max= 4096, per=47.77%, avg=4096.00, stdev= 0.00, samples=1 00:36:28.894 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:28.894 lat (usec) : 250=4.86%, 500=32.34%, 750=48.41%, 1000=10.47% 00:36:28.894 lat (msec) : 2=0.75%, 50=3.18% 00:36:28.894 cpu : usr=0.69%, sys=2.37%, ctx=536, majf=0, minf=1 00:36:28.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:28.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.894 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:28.894 job1: (groupid=0, jobs=1): err= 0: pid=3638575: Mon Oct 7 09:57:28 2024 00:36:28.894 read: IOPS=15, BW=61.8KiB/s (63.3kB/s)(64.0KiB/1036msec) 00:36:28.894 slat (nsec): min=27013, max=27641, avg=27214.31, stdev=195.77 00:36:28.894 clat (usec): min=41020, max=42073, avg=41838.57, stdev=319.11 00:36:28.894 lat (usec): min=41047, max=42100, avg=41865.79, stdev=319.08 00:36:28.894 clat percentiles (usec): 00:36:28.894 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:36:28.894 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:36:28.894 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:28.894 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:28.894 | 99.99th=[42206] 00:36:28.894 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:36:28.894 slat (nsec): min=9726, max=69615, avg=32536.46, stdev=8304.68 00:36:28.894 clat (usec): min=276, max=1206, avg=674.51, stdev=145.75 00:36:28.894 lat (usec): min=312, max=1240, avg=707.05, stdev=148.07 00:36:28.894 clat percentiles (usec): 00:36:28.894 | 1.00th=[ 363], 5.00th=[ 453], 10.00th=[ 498], 20.00th=[ 553], 00:36:28.894 | 30.00th=[ 603], 40.00th=[ 635], 50.00th=[ 668], 60.00th=[ 701], 00:36:28.894 | 70.00th=[ 734], 80.00th=[ 775], 90.00th=[ 873], 95.00th=[ 947], 00:36:28.894 | 99.00th=[ 1057], 99.50th=[ 1123], 99.90th=[ 1205], 99.95th=[ 1205], 00:36:28.894 | 99.99th=[ 1205] 00:36:28.894 bw ( KiB/s): min= 4096, max= 4096, per=47.77%, avg=4096.00, stdev= 0.00, samples=1 00:36:28.894 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:28.894 lat (usec) : 500=9.85%, 750=61.17%, 1000=24.05% 00:36:28.894 lat (msec) : 2=1.89%, 50=3.03% 00:36:28.894 cpu : usr=0.58%, sys=1.84%, ctx=529, majf=0, minf=1 00:36:28.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:28.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.894 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.894 latency : target=0, window=0, percentile=100.00%, depth=1 
00:36:28.894 job2: (groupid=0, jobs=1): err= 0: pid=3638576: Mon Oct 7 09:57:28 2024 00:36:28.894 read: IOPS=17, BW=70.9KiB/s (72.6kB/s)(72.0KiB/1016msec) 00:36:28.894 slat (nsec): min=26635, max=27747, avg=27088.17, stdev=284.30 00:36:28.894 clat (usec): min=1308, max=42154, avg=39300.44, stdev=9493.83 00:36:28.894 lat (usec): min=1335, max=42181, avg=39327.53, stdev=9493.86 00:36:28.894 clat percentiles (usec): 00:36:28.894 | 1.00th=[ 1303], 5.00th=[ 1303], 10.00th=[40633], 20.00th=[41157], 00:36:28.894 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:36:28.894 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:28.894 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:28.894 | 99.99th=[42206] 00:36:28.894 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:36:28.894 slat (nsec): min=9151, max=73637, avg=31710.12, stdev=9803.83 00:36:28.894 clat (usec): min=222, max=923, avg=562.01, stdev=139.49 00:36:28.894 lat (usec): min=245, max=958, avg=593.72, stdev=141.64 00:36:28.894 clat percentiles (usec): 00:36:28.894 | 1.00th=[ 253], 5.00th=[ 310], 10.00th=[ 379], 20.00th=[ 445], 00:36:28.894 | 30.00th=[ 494], 40.00th=[ 529], 50.00th=[ 562], 60.00th=[ 594], 00:36:28.894 | 70.00th=[ 635], 80.00th=[ 685], 90.00th=[ 742], 95.00th=[ 791], 00:36:28.894 | 99.00th=[ 865], 99.50th=[ 906], 99.90th=[ 922], 99.95th=[ 922], 00:36:28.894 | 99.99th=[ 922] 00:36:28.894 bw ( KiB/s): min= 4096, max= 4096, per=47.77%, avg=4096.00, stdev= 0.00, samples=1 00:36:28.894 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:28.894 lat (usec) : 250=0.94%, 500=29.81%, 750=56.98%, 1000=8.87% 00:36:28.894 lat (msec) : 2=0.19%, 50=3.21% 00:36:28.894 cpu : usr=0.99%, sys=2.07%, ctx=530, majf=0, minf=2 00:36:28.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:28.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.894 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:28.894 job3: (groupid=0, jobs=1): err= 0: pid=3638577: Mon Oct 7 09:57:28 2024 00:36:28.894 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:28.894 slat (nsec): min=8532, max=59324, avg=15789.05, stdev=8490.41 00:36:28.894 clat (usec): min=532, max=1302, avg=1075.83, stdev=99.38 00:36:28.894 lat (usec): min=541, max=1321, avg=1091.62, stdev=101.14 00:36:28.894 clat percentiles (usec): 00:36:28.894 | 1.00th=[ 791], 5.00th=[ 898], 10.00th=[ 963], 20.00th=[ 1004], 00:36:28.894 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1106], 00:36:28.894 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1221], 00:36:28.894 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 1303], 99.95th=[ 1303], 00:36:28.894 | 99.99th=[ 1303] 00:36:28.894 write: IOPS=684, BW=2737KiB/s (2803kB/s)(2740KiB/1001msec); 0 zone resets 00:36:28.894 slat (nsec): min=3344, max=51277, avg=15490.35, stdev=9978.49 00:36:28.894 clat (usec): min=210, max=2765, avg=621.24, stdev=163.24 00:36:28.894 lat (usec): min=214, max=2776, avg=636.73, stdev=164.10 00:36:28.894 clat percentiles (usec): 00:36:28.894 | 1.00th=[ 285], 5.00th=[ 375], 10.00th=[ 441], 20.00th=[ 498], 00:36:28.895 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 660], 00:36:28.895 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 791], 95.00th=[ 
832], 00:36:28.895 | 99.00th=[ 971], 99.50th=[ 1012], 99.90th=[ 2769], 99.95th=[ 2769], 00:36:28.895 | 99.99th=[ 2769] 00:36:28.895 bw ( KiB/s): min= 4096, max= 4096, per=47.77%, avg=4096.00, stdev= 0.00, samples=1 00:36:28.895 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:28.895 lat (usec) : 250=0.17%, 500=11.53%, 750=35.67%, 1000=18.13% 00:36:28.895 lat (msec) : 2=34.42%, 4=0.08% 00:36:28.895 cpu : usr=1.30%, sys=1.40%, ctx=1197, majf=0, minf=2 00:36:28.895 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:28.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.895 issued rwts: total=512,685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:28.895 00:36:28.895 Run status group 0 (all jobs): 00:36:28.895 READ: bw=2197KiB/s (2250kB/s), 61.8KiB/s-2046KiB/s (63.3kB/s-2095kB/s), io=2276KiB (2331kB), run=1001-1036msec 00:36:28.895 WRITE: bw=8575KiB/s (8781kB/s), 1977KiB/s-2737KiB/s (2024kB/s-2803kB/s), io=8884KiB (9097kB), run=1001-1036msec 00:36:28.895 00:36:28.895 Disk stats (read/write): 00:36:28.895 nvme0n1: ios=61/512, merge=0/0, ticks=649/207, in_queue=856, util=90.48% 00:36:28.895 nvme0n2: ios=48/512, merge=0/0, ticks=659/339, in_queue=998, util=97.66% 00:36:28.895 nvme0n3: ios=69/512, merge=0/0, ticks=572/240, in_queue=812, util=92.30% 00:36:28.895 nvme0n4: ios=523/512, merge=0/0, ticks=615/317, in_queue=932, util=95.41% 00:36:28.895 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:36:28.895 [global] 00:36:28.895 thread=1 00:36:28.895 invalidate=1 00:36:28.895 rw=write 00:36:28.895 time_based=1 00:36:28.895 runtime=1 00:36:28.895 ioengine=libaio 00:36:28.895 direct=1 00:36:28.895 bs=4096 00:36:28.895 iodepth=128 00:36:28.895 norandommap=0 00:36:28.895 numjobs=1 00:36:28.895 00:36:28.895 verify_dump=1 00:36:28.895 verify_backlog=512 00:36:28.895 verify_state_save=0 00:36:28.895 do_verify=1 00:36:28.895 verify=crc32c-intel 00:36:28.895 [job0] 00:36:28.895 filename=/dev/nvme0n1 00:36:28.895 [job1] 00:36:28.895 filename=/dev/nvme0n2 00:36:28.895 [job2] 00:36:28.895 filename=/dev/nvme0n3 00:36:28.895 [job3] 00:36:28.895 filename=/dev/nvme0n4 00:36:28.895 Could not set queue depth (nvme0n1) 00:36:28.895 Could not set queue depth (nvme0n2) 00:36:28.895 Could not set queue depth (nvme0n3) 00:36:28.895 Could not set queue depth (nvme0n4) 00:36:29.171 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:29.171 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:29.171 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:29.171 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:29.171 fio-3.35 00:36:29.171 Starting 4 threads 00:36:30.563 00:36:30.563 job0: (groupid=0, jobs=1): err= 0: pid=3639100: Mon Oct 7 09:57:29 2024 00:36:30.563 read: IOPS=6792, BW=26.5MiB/s (27.8MB/s)(26.6MiB/1004msec) 00:36:30.563 slat (nsec): min=981, max=9852.3k, avg=70097.64, stdev=526450.30 00:36:30.563 clat (usec): min=2934, max=34121, avg=9178.25, stdev=4218.36 00:36:30.563 lat (usec): min=3816, 
max=34122, avg=9248.35, stdev=4253.29 00:36:30.563 clat percentiles (usec): 00:36:30.563 | 1.00th=[ 4178], 5.00th=[ 5014], 10.00th=[ 5342], 20.00th=[ 5932], 00:36:30.563 | 30.00th=[ 6456], 40.00th=[ 7111], 50.00th=[ 8094], 60.00th=[ 8717], 00:36:30.563 | 70.00th=[ 9765], 80.00th=[12256], 90.00th=[14877], 95.00th=[17957], 00:36:30.563 | 99.00th=[23725], 99.50th=[24773], 99.90th=[33162], 99.95th=[34341], 00:36:30.563 | 99.99th=[34341] 00:36:30.563 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:36:30.563 slat (nsec): min=1615, max=15703k, avg=68140.59, stdev=554083.83 00:36:30.563 clat (usec): min=1158, max=34852, avg=9040.38, stdev=4969.38 00:36:30.563 lat (usec): min=1168, max=34857, avg=9108.52, stdev=4994.24 00:36:30.563 clat percentiles (usec): 00:36:30.563 | 1.00th=[ 3851], 5.00th=[ 4228], 10.00th=[ 4621], 20.00th=[ 5342], 00:36:30.563 | 30.00th=[ 6128], 40.00th=[ 6783], 50.00th=[ 7832], 60.00th=[ 8291], 00:36:30.563 | 70.00th=[ 9372], 80.00th=[11863], 90.00th=[16909], 95.00th=[18220], 00:36:30.563 | 99.00th=[31065], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:36:30.563 | 99.99th=[34866] 00:36:30.563 bw ( KiB/s): min=22339, max=34960, per=31.84%, avg=28649.50, stdev=8924.39, samples=2 00:36:30.563 iops : min= 5584, max= 8740, avg=7162.00, stdev=2231.63, samples=2 00:36:30.563 lat (msec) : 2=0.01%, 4=1.44%, 10=70.84%, 20=25.22%, 50=2.48% 00:36:30.563 cpu : usr=4.79%, sys=6.68%, ctx=338, majf=0, minf=1 00:36:30.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:36:30.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:30.563 issued rwts: total=6820,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.563 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:30.563 job1: (groupid=0, jobs=1): err= 0: pid=3639101: Mon Oct 7 09:57:29 2024 00:36:30.563 read: IOPS=7641, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1005msec) 00:36:30.563 slat (nsec): min=892, max=6348.0k, avg=63988.26, stdev=424149.58 00:36:30.563 clat (usec): min=3388, max=28694, avg=8441.61, stdev=3155.50 00:36:30.563 lat (usec): min=3390, max=28699, avg=8505.59, stdev=3184.89 00:36:30.563 clat percentiles (usec): 00:36:30.563 | 1.00th=[ 4178], 5.00th=[ 5080], 10.00th=[ 5669], 20.00th=[ 6194], 00:36:30.563 | 30.00th=[ 6652], 40.00th=[ 7242], 50.00th=[ 7767], 60.00th=[ 8356], 00:36:30.563 | 70.00th=[ 8979], 80.00th=[ 9765], 90.00th=[12387], 95.00th=[14484], 00:36:30.563 | 99.00th=[20317], 99.50th=[24249], 99.90th=[28181], 99.95th=[28705], 00:36:30.563 | 99.99th=[28705] 00:36:30.563 write: IOPS=7806, BW=30.5MiB/s (32.0MB/s)(30.6MiB/1005msec); 0 zone resets 00:36:30.563 slat (nsec): min=1537, max=6589.5k, avg=60118.18, stdev=390287.15 00:36:30.563 clat (usec): min=1345, max=28679, avg=7960.92, stdev=4121.51 00:36:30.563 lat (usec): min=1352, max=28681, avg=8021.04, stdev=4144.22 00:36:30.563 clat percentiles (usec): 00:36:30.563 | 1.00th=[ 3589], 5.00th=[ 4228], 10.00th=[ 4490], 20.00th=[ 4883], 00:36:30.563 | 30.00th=[ 5407], 40.00th=[ 6194], 50.00th=[ 7046], 60.00th=[ 7767], 00:36:30.563 | 70.00th=[ 8225], 80.00th=[ 9372], 90.00th=[14353], 95.00th=[16319], 00:36:30.563 | 99.00th=[24511], 99.50th=[26346], 99.90th=[27395], 99.95th=[27395], 00:36:30.563 | 99.99th=[28705] 00:36:30.563 bw ( KiB/s): min=29664, max=32088, per=34.32%, avg=30876.00, stdev=1714.03, samples=2 00:36:30.563 iops : min= 7416, max= 8022, avg=7719.00, stdev=428.51, samples=2 
00:36:30.563 lat (msec) : 2=0.10%, 4=1.70%, 10=80.50%, 20=15.67%, 50=2.03% 00:36:30.563 cpu : usr=5.28%, sys=7.07%, ctx=480, majf=0, minf=1 00:36:30.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:36:30.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:30.563 issued rwts: total=7680,7846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.563 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:30.563 job2: (groupid=0, jobs=1): err= 0: pid=3639103: Mon Oct 7 09:57:29 2024 00:36:30.563 read: IOPS=3129, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1009msec) 00:36:30.563 slat (nsec): min=1018, max=13595k, avg=120851.79, stdev=783078.46 00:36:30.563 clat (usec): min=4573, max=58637, avg=14144.60, stdev=7650.21 00:36:30.563 lat (usec): min=4592, max=58645, avg=14265.45, stdev=7728.08 00:36:30.564 clat percentiles (usec): 00:36:30.564 | 1.00th=[ 4817], 5.00th=[ 6128], 10.00th=[ 7046], 20.00th=[ 8979], 00:36:30.564 | 30.00th=[10159], 40.00th=[11338], 50.00th=[12256], 60.00th=[13042], 00:36:30.564 | 70.00th=[15533], 80.00th=[17957], 90.00th=[22414], 95.00th=[28443], 00:36:30.564 | 99.00th=[45876], 99.50th=[49546], 99.90th=[58459], 99.95th=[58459], 00:36:30.564 | 99.99th=[58459] 00:36:30.564 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:36:30.564 slat (nsec): min=1676, max=12449k, avg=167378.91, stdev=845092.71 00:36:30.564 clat (usec): min=1235, max=62191, avg=23248.90, stdev=17820.41 00:36:30.564 lat (usec): min=1247, max=62200, avg=23416.28, stdev=17933.88 00:36:30.564 clat percentiles (usec): 00:36:30.564 | 1.00th=[ 6456], 5.00th=[ 7046], 10.00th=[ 8160], 20.00th=[ 9765], 00:36:30.564 | 30.00th=[11600], 40.00th=[12256], 50.00th=[13698], 60.00th=[16057], 00:36:30.564 | 70.00th=[24511], 80.00th=[51119], 90.00th=[54264], 95.00th=[55313], 00:36:30.564 | 99.00th=[57410], 99.50th=[58459], 99.90th=[62129], 99.95th=[62129], 00:36:30.564 | 99.99th=[62129] 00:36:30.564 bw ( KiB/s): min= 9040, max=19304, per=15.75%, avg=14172.00, stdev=7257.74, samples=2 00:36:30.564 iops : min= 2260, max= 4826, avg=3543.00, stdev=1814.44, samples=2 00:36:30.564 lat (msec) : 2=0.03%, 4=0.03%, 10=24.62%, 20=50.04%, 50=13.78% 00:36:30.564 lat (msec) : 100=11.50% 00:36:30.564 cpu : usr=3.08%, sys=3.67%, ctx=304, majf=0, minf=1 00:36:30.564 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:36:30.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:30.564 issued rwts: total=3158,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.564 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:30.564 job3: (groupid=0, jobs=1): err= 0: pid=3639104: Mon Oct 7 09:57:29 2024 00:36:30.564 read: IOPS=3851, BW=15.0MiB/s (15.8MB/s)(15.1MiB/1006msec) 00:36:30.564 slat (nsec): min=945, max=8802.6k, avg=94157.75, stdev=575183.38 00:36:30.564 clat (usec): min=1413, max=43076, avg=12362.20, stdev=6758.88 00:36:30.564 lat (usec): min=1971, max=44992, avg=12456.36, stdev=6799.17 00:36:30.564 clat percentiles (usec): 00:36:30.564 | 1.00th=[ 2245], 5.00th=[ 4883], 10.00th=[ 6325], 20.00th=[ 8029], 00:36:30.564 | 30.00th=[ 8455], 40.00th=[ 9372], 50.00th=[10159], 60.00th=[11338], 00:36:30.564 | 70.00th=[13173], 80.00th=[17171], 90.00th=[21365], 95.00th=[25560], 00:36:30.564 | 99.00th=[36439], 99.50th=[37487], 99.90th=[43254], 
99.95th=[43254], 00:36:30.564 | 99.99th=[43254] 00:36:30.564 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:36:30.564 slat (nsec): min=1630, max=14806k, avg=146609.66, stdev=756348.21 00:36:30.564 clat (usec): min=722, max=61094, avg=19325.83, stdev=17175.37 00:36:30.564 lat (usec): min=733, max=61107, avg=19472.44, stdev=17302.08 00:36:30.564 clat percentiles (usec): 00:36:30.564 | 1.00th=[ 2376], 5.00th=[ 5407], 10.00th=[ 7439], 20.00th=[ 8160], 00:36:30.564 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[11600], 60.00th=[14222], 00:36:30.564 | 70.00th=[16319], 80.00th=[31065], 90.00th=[54789], 95.00th=[55837], 00:36:30.564 | 99.00th=[59507], 99.50th=[60556], 99.90th=[61080], 99.95th=[61080], 00:36:30.564 | 99.99th=[61080] 00:36:30.564 bw ( KiB/s): min=14664, max=18067, per=18.19%, avg=16365.50, stdev=2406.28, samples=2 00:36:30.564 iops : min= 3666, max= 4516, avg=4091.00, stdev=601.04, samples=2 00:36:30.564 lat (usec) : 750=0.01%, 1000=0.03% 00:36:30.564 lat (msec) : 2=0.25%, 4=2.82%, 10=40.76%, 20=37.42%, 50=10.73% 00:36:30.564 lat (msec) : 100=7.98% 00:36:30.564 cpu : usr=3.08%, sys=4.28%, ctx=387, majf=0, minf=2 00:36:30.564 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:36:30.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:30.564 issued rwts: total=3875,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.564 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:30.564 00:36:30.564 Run status group 0 (all jobs): 00:36:30.564 READ: bw=83.4MiB/s (87.4MB/s), 12.2MiB/s-29.9MiB/s (12.8MB/s-31.3MB/s), io=84.1MiB (88.2MB), run=1004-1009msec 00:36:30.564 WRITE: bw=87.9MiB/s (92.1MB/s), 13.9MiB/s-30.5MiB/s (14.5MB/s-32.0MB/s), io=88.6MiB (93.0MB), run=1004-1009msec 00:36:30.564 00:36:30.564 Disk stats (read/write): 00:36:30.564 nvme0n1: ios=4728/5120, merge=0/0, ticks=44877/50170, in_queue=95047, util=81.86% 00:36:30.564 nvme0n2: ios=6286/6656, merge=0/0, ticks=39257/35165, in_queue=74422, util=90.26% 00:36:30.564 nvme0n3: ios=2837/3072, merge=0/0, ticks=37954/57563, in_queue=95517, util=86.44% 00:36:30.564 nvme0n4: ios=2577/2823, merge=0/0, ticks=17524/32189, in_queue=49713, util=99.66% 00:36:30.564 09:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:36:30.564 [global] 00:36:30.564 thread=1 00:36:30.564 invalidate=1 00:36:30.564 rw=randwrite 00:36:30.564 time_based=1 00:36:30.564 runtime=1 00:36:30.564 ioengine=libaio 00:36:30.564 direct=1 00:36:30.564 bs=4096 00:36:30.564 iodepth=128 00:36:30.564 norandommap=0 00:36:30.564 numjobs=1 00:36:30.564 00:36:30.564 verify_dump=1 00:36:30.564 verify_backlog=512 00:36:30.564 verify_state_save=0 00:36:30.564 do_verify=1 00:36:30.564 verify=crc32c-intel 00:36:30.564 [job0] 00:36:30.564 filename=/dev/nvme0n1 00:36:30.564 [job1] 00:36:30.564 filename=/dev/nvme0n2 00:36:30.564 [job2] 00:36:30.564 filename=/dev/nvme0n3 00:36:30.564 [job3] 00:36:30.564 filename=/dev/nvme0n4 00:36:30.564 Could not set queue depth (nvme0n1) 00:36:30.564 Could not set queue depth (nvme0n2) 00:36:30.564 Could not set queue depth (nvme0n3) 00:36:30.564 Could not set queue depth (nvme0n4) 00:36:30.834 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:30.834 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:30.834 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:30.834 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:30.834 fio-3.35 00:36:30.834 Starting 4 threads 00:36:32.221 00:36:32.221 job0: (groupid=0, jobs=1): err= 0: pid=3639619: Mon Oct 7 09:57:31 2024 00:36:32.221 read: IOPS=4184, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1004msec) 00:36:32.221 slat (nsec): min=887, max=9022.0k, avg=116338.93, stdev=644197.92 00:36:32.221 clat (usec): min=2820, max=41262, avg=13911.13, stdev=6520.78 00:36:32.221 lat (usec): min=4285, max=41289, avg=14027.47, stdev=6576.77 00:36:32.221 clat percentiles (usec): 00:36:32.221 | 1.00th=[ 5997], 5.00th=[ 7242], 10.00th=[ 8225], 20.00th=[ 9503], 00:36:32.221 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11600], 60.00th=[13042], 00:36:32.221 | 70.00th=[14615], 80.00th=[16909], 90.00th=[22676], 95.00th=[29754], 00:36:32.221 | 99.00th=[38011], 99.50th=[38011], 99.90th=[39060], 99.95th=[40109], 00:36:32.221 | 99.99th=[41157] 00:36:32.221 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:36:32.221 slat (nsec): min=1492, max=11470k, avg=106431.95, stdev=709816.28 00:36:32.221 clat (usec): min=707, max=41641, avg=14952.80, stdev=6923.73 00:36:32.221 lat (usec): min=4254, max=41670, avg=15059.24, stdev=6993.42 00:36:32.221 clat percentiles (usec): 00:36:32.221 | 1.00th=[ 5866], 5.00th=[ 7177], 10.00th=[ 7701], 20.00th=[ 9241], 00:36:32.221 | 30.00th=[10028], 40.00th=[11338], 50.00th=[12649], 60.00th=[14353], 00:36:32.221 | 70.00th=[16909], 80.00th=[21365], 90.00th=[26084], 95.00th=[27919], 00:36:32.221 | 99.00th=[32900], 99.50th=[32900], 99.90th=[36963], 99.95th=[40633], 00:36:32.221 | 99.99th=[41681] 00:36:32.221 bw ( KiB/s): min=18080, max=18608, per=20.14%, avg=18344.00, stdev=373.35, samples=2 00:36:32.221 iops : min= 4520, max= 4652, avg=4586.00, stdev=93.34, samples=2 00:36:32.221 lat (usec) : 750=0.01% 00:36:32.221 lat (msec) : 4=0.01%, 10=28.24%, 20=52.83%, 50=18.90% 00:36:32.221 cpu : usr=2.29%, sys=4.29%, ctx=387, majf=0, minf=1 00:36:32.221 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:36:32.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:32.221 issued rwts: total=4201,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.221 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:32.221 job1: (groupid=0, jobs=1): err= 0: pid=3639621: Mon Oct 7 09:57:31 2024 00:36:32.221 read: IOPS=5159, BW=20.2MiB/s (21.1MB/s)(20.3MiB/1006msec) 00:36:32.221 slat (nsec): min=901, max=8391.6k, avg=89725.32, stdev=568559.86 00:36:32.221 clat (usec): min=3284, max=60972, avg=10204.11, stdev=6422.36 00:36:32.221 lat (usec): min=3291, max=60981, avg=10293.84, stdev=6479.25 00:36:32.221 clat percentiles (usec): 00:36:32.221 | 1.00th=[ 4228], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 7242], 00:36:32.221 | 30.00th=[ 7832], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9241], 00:36:32.221 | 70.00th=[ 9896], 80.00th=[10945], 90.00th=[13960], 95.00th=[20579], 00:36:32.221 | 99.00th=[47449], 99.50th=[59507], 99.90th=[60556], 99.95th=[61080], 00:36:32.221 | 99.99th=[61080] 00:36:32.221 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:36:32.221 slat 
(nsec): min=1516, max=8449.4k, avg=89086.86, stdev=418574.56 00:36:32.221 clat (usec): min=1558, max=60934, avg=13262.14, stdev=7478.66 00:36:32.221 lat (usec): min=1567, max=60936, avg=13351.22, stdev=7515.21 00:36:32.221 clat percentiles (usec): 00:36:32.221 | 1.00th=[ 3392], 5.00th=[ 5014], 10.00th=[ 5669], 20.00th=[ 6390], 00:36:32.221 | 30.00th=[ 7373], 40.00th=[ 8848], 50.00th=[11338], 60.00th=[15139], 00:36:32.221 | 70.00th=[16909], 80.00th=[19006], 90.00th=[23725], 95.00th=[25560], 00:36:32.221 | 99.00th=[34341], 99.50th=[43254], 99.90th=[52691], 99.95th=[52691], 00:36:32.221 | 99.99th=[61080] 00:36:32.221 bw ( KiB/s): min=19824, max=24768, per=24.48%, avg=22296.00, stdev=3495.94, samples=2 00:36:32.221 iops : min= 4956, max= 6192, avg=5574.00, stdev=873.98, samples=2 00:36:32.221 lat (msec) : 2=0.06%, 4=0.67%, 10=55.75%, 20=31.50%, 50=11.45% 00:36:32.221 lat (msec) : 100=0.58% 00:36:32.221 cpu : usr=2.89%, sys=5.17%, ctx=574, majf=0, minf=2 00:36:32.221 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:36:32.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:32.221 issued rwts: total=5190,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.221 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:32.221 job2: (groupid=0, jobs=1): err= 0: pid=3639623: Mon Oct 7 09:57:31 2024 00:36:32.221 read: IOPS=5733, BW=22.4MiB/s (23.5MB/s)(22.6MiB/1007msec) 00:36:32.221 slat (nsec): min=925, max=10256k, avg=89145.32, stdev=586992.02 00:36:32.221 clat (usec): min=1784, max=33662, avg=11484.92, stdev=6166.70 00:36:32.221 lat (usec): min=2809, max=33666, avg=11574.06, stdev=6217.23 00:36:32.221 clat percentiles (usec): 00:36:32.221 | 1.00th=[ 4817], 5.00th=[ 5932], 10.00th=[ 6718], 20.00th=[ 7504], 00:36:32.221 | 30.00th=[ 7963], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9765], 00:36:32.221 | 70.00th=[10945], 80.00th=[15533], 90.00th=[21103], 95.00th=[26084], 00:36:32.221 | 99.00th=[31589], 99.50th=[32113], 99.90th=[33817], 99.95th=[33817], 00:36:32.221 | 99.99th=[33817] 00:36:32.221 write: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec); 0 zone resets 00:36:32.221 slat (nsec): min=1545, max=13258k, avg=73898.57, stdev=508803.56 00:36:32.221 clat (usec): min=1106, max=38556, avg=9975.77, stdev=6145.53 00:36:32.221 lat (usec): min=1116, max=38563, avg=10049.67, stdev=6183.41 00:36:32.221 clat percentiles (usec): 00:36:32.221 | 1.00th=[ 3818], 5.00th=[ 5145], 10.00th=[ 5669], 20.00th=[ 6718], 00:36:32.221 | 30.00th=[ 7177], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 8848], 00:36:32.221 | 70.00th=[10028], 80.00th=[11338], 90.00th=[14091], 95.00th=[25035], 00:36:32.221 | 99.00th=[36963], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:36:32.221 | 99.99th=[38536] 00:36:32.221 bw ( KiB/s): min=20480, max=28672, per=26.98%, avg=24576.00, stdev=5792.62, samples=2 00:36:32.221 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2 00:36:32.221 lat (msec) : 2=0.27%, 4=0.39%, 10=65.35%, 20=24.02%, 50=9.97% 00:36:32.221 cpu : usr=4.08%, sys=6.06%, ctx=379, majf=0, minf=2 00:36:32.221 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:36:32.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:32.221 issued rwts: total=5774,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.221 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:36:32.221 job3: (groupid=0, jobs=1): err= 0: pid=3639624: Mon Oct 7 09:57:31 2024 00:36:32.221 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:36:32.221 slat (nsec): min=908, max=12929k, avg=74140.84, stdev=395636.72 00:36:32.221 clat (usec): min=4251, max=95519, avg=10144.43, stdev=4991.94 00:36:32.221 lat (usec): min=4252, max=95521, avg=10218.57, stdev=4998.48 00:36:32.221 clat percentiles (usec): 00:36:32.221 | 1.00th=[ 5211], 5.00th=[ 6849], 10.00th=[ 7701], 20.00th=[ 8356], 00:36:32.221 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:36:32.221 | 70.00th=[10028], 80.00th=[10814], 90.00th=[12256], 95.00th=[15664], 00:36:32.221 | 99.00th=[30802], 99.50th=[36439], 99.90th=[95945], 99.95th=[95945], 00:36:32.221 | 99.99th=[95945] 00:36:32.221 write: IOPS=6526, BW=25.5MiB/s (26.7MB/s)(25.6MiB/1003msec); 0 zone resets 00:36:32.221 slat (nsec): min=1500, max=8171.5k, avg=80546.01, stdev=419468.97 00:36:32.221 clat (usec): min=1647, max=68474, avg=9893.99, stdev=7590.25 00:36:32.221 lat (usec): min=2467, max=68482, avg=9974.53, stdev=7643.24 00:36:32.221 clat percentiles (usec): 00:36:32.221 | 1.00th=[ 4883], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 7439], 00:36:32.221 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[ 8455], 00:36:32.221 | 70.00th=[ 9110], 80.00th=[10028], 90.00th=[12518], 95.00th=[14353], 00:36:32.221 | 99.00th=[56361], 99.50th=[63177], 99.90th=[68682], 99.95th=[68682], 00:36:32.221 | 99.99th=[68682] 00:36:32.221 bw ( KiB/s): min=22680, max=28672, per=28.19%, avg=25676.00, stdev=4236.98, samples=2 00:36:32.221 iops : min= 5670, max= 7168, avg=6419.00, stdev=1059.25, samples=2 00:36:32.221 lat (msec) : 2=0.01%, 4=0.09%, 10=74.39%, 20=23.08%, 50=1.50% 00:36:32.221 lat (msec) : 100=0.93% 00:36:32.221 cpu : usr=1.60%, sys=3.99%, ctx=764, majf=0, minf=1 00:36:32.221 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:36:32.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:32.221 issued rwts: total=6144,6546,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.221 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:32.221 00:36:32.221 Run status group 0 (all jobs): 00:36:32.221 READ: bw=82.7MiB/s (86.7MB/s), 16.3MiB/s-23.9MiB/s (17.1MB/s-25.1MB/s), io=83.2MiB (87.3MB), run=1003-1007msec 00:36:32.221 WRITE: bw=88.9MiB/s (93.3MB/s), 17.9MiB/s-25.5MiB/s (18.8MB/s-26.7MB/s), io=89.6MiB (93.9MB), run=1003-1007msec 00:36:32.221 00:36:32.221 Disk stats (read/write): 00:36:32.221 nvme0n1: ios=3349/3584, merge=0/0, ticks=17157/17611, in_queue=34768, util=90.28% 00:36:32.221 nvme0n2: ios=3926/4096, merge=0/0, ticks=40855/61933, in_queue=102788, util=87.93% 00:36:32.221 nvme0n3: ios=5120/5209, merge=0/0, ticks=29714/25270, in_queue=54984, util=88.25% 00:36:32.221 nvme0n4: ios=5669/5735, merge=0/0, ticks=16268/14672, in_queue=30940, util=91.21% 00:36:32.221 09:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:36:32.221 09:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3639948 00:36:32.221 09:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:36:32.222 09:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p 
nvmf -i 4096 -d 1 -t read -r 10 00:36:32.222 [global] 00:36:32.222 thread=1 00:36:32.222 invalidate=1 00:36:32.222 rw=read 00:36:32.222 time_based=1 00:36:32.222 runtime=10 00:36:32.222 ioengine=libaio 00:36:32.222 direct=1 00:36:32.222 bs=4096 00:36:32.222 iodepth=1 00:36:32.222 norandommap=1 00:36:32.222 numjobs=1 00:36:32.222 00:36:32.222 [job0] 00:36:32.222 filename=/dev/nvme0n1 00:36:32.222 [job1] 00:36:32.222 filename=/dev/nvme0n2 00:36:32.222 [job2] 00:36:32.222 filename=/dev/nvme0n3 00:36:32.222 [job3] 00:36:32.222 filename=/dev/nvme0n4 00:36:32.222 Could not set queue depth (nvme0n1) 00:36:32.222 Could not set queue depth (nvme0n2) 00:36:32.222 Could not set queue depth (nvme0n3) 00:36:32.222 Could not set queue depth (nvme0n4) 00:36:32.483 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:32.483 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:32.483 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:32.483 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:32.483 fio-3.35 00:36:32.483 Starting 4 threads 00:36:35.185 09:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:36:35.185 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=266240, buflen=4096 00:36:35.185 fio: pid=3640148, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:35.185 09:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:36:35.446 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=17297408, buflen=4096 00:36:35.446 fio: pid=3640147, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:35.446 09:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:35.446 09:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:36:35.708 09:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:35.708 09:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:36:35.708 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=290816, buflen=4096 00:36:35.708 fio: pid=3640145, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:35.708 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=11620352, buflen=4096 00:36:35.708 fio: pid=3640146, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:36:35.708 09:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:35.708 09:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:36:35.708 00:36:35.708 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3640145: Mon Oct 7 09:57:35 2024 00:36:35.708 read: IOPS=24, BW=95.3KiB/s (97.6kB/s)(284KiB/2980msec) 00:36:35.708 slat (usec): min=24, max=29795, avg=441.44, stdev=3508.20 00:36:35.708 clat (usec): min=771, max=43047, avg=41204.54, stdev=4888.81 00:36:35.708 lat (usec): min=832, max=71991, avg=41651.84, stdev=6098.07 00:36:35.708 clat percentiles (usec): 00:36:35.708 | 1.00th=[ 775], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:36:35.708 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:36:35.708 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:35.708 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:36:35.708 | 99.99th=[43254] 00:36:35.708 bw ( KiB/s): min= 88, max= 104, per=1.05%, avg=96.00, stdev= 5.66, samples=5 00:36:35.708 iops : min= 22, max= 26, avg=24.00, stdev= 1.41, samples=5 00:36:35.708 lat (usec) : 1000=1.39% 00:36:35.708 lat (msec) : 50=97.22% 00:36:35.708 cpu : usr=0.10%, sys=0.00%, ctx=75, majf=0, minf=1 00:36:35.708 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:35.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:35.708 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:35.708 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:35.708 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:35.708 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3640146: Mon Oct 7 09:57:35 2024 00:36:35.708 read: IOPS=900, BW=3601KiB/s (3688kB/s)(11.1MiB/3151msec) 00:36:35.708 slat (usec): min=6, max=14948, avg=44.82, stdev=475.86 00:36:35.708 clat (usec): min=550, max=40990, avg=1059.30, stdev=1676.34 00:36:35.708 lat (usec): min=576, max=41118, avg=1101.72, stdev=1738.55 00:36:35.708 clat percentiles (usec): 00:36:35.708 | 1.00th=[ 750], 5.00th=[ 807], 10.00th=[ 873], 20.00th=[ 930], 00:36:35.709 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1020], 00:36:35.709 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1074], 95.00th=[ 1106], 00:36:35.709 | 99.00th=[ 1156], 99.50th=[ 1205], 99.90th=[41157], 99.95th=[41157], 00:36:35.709 | 99.99th=[41157] 00:36:35.709 bw ( KiB/s): min= 2636, max= 3976, per=40.57%, avg=3706.00, stdev=525.29, samples=6 00:36:35.709 iops : min= 659, max= 994, avg=926.50, stdev=131.32, samples=6 00:36:35.709 lat (usec) : 750=1.06%, 1000=45.95% 00:36:35.709 lat (msec) : 2=52.78%, 50=0.18% 00:36:35.709 cpu : usr=0.92%, sys=3.02%, ctx=2845, majf=0, minf=2 00:36:35.709 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:35.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:35.709 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:35.709 issued rwts: total=2838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:35.709 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:35.709 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3640147: Mon Oct 7 09:57:35 2024 00:36:35.709 read: IOPS=1518, BW=6072KiB/s (6218kB/s)(16.5MiB/2782msec) 00:36:35.709 slat (usec): min=2, max=12201, avg=16.09, stdev=205.38 00:36:35.709 clat (usec): min=175, max=1012, avg=635.10, 
stdev=113.28 00:36:35.709 lat (usec): min=180, max=13023, avg=651.19, stdev=240.44 00:36:35.709 clat percentiles (usec): 00:36:35.709 | 1.00th=[ 293], 5.00th=[ 445], 10.00th=[ 519], 20.00th=[ 570], 00:36:35.709 | 30.00th=[ 603], 40.00th=[ 619], 50.00th=[ 635], 60.00th=[ 644], 00:36:35.709 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 799], 95.00th=[ 840], 00:36:35.709 | 99.00th=[ 889], 99.50th=[ 914], 99.90th=[ 971], 99.95th=[ 988], 00:36:35.709 | 99.99th=[ 1012] 00:36:35.709 bw ( KiB/s): min= 5208, max= 6696, per=68.53%, avg=6260.80, stdev=636.20, samples=5 00:36:35.709 iops : min= 1302, max= 1674, avg=1565.20, stdev=159.05, samples=5 00:36:35.709 lat (usec) : 250=0.26%, 500=7.27%, 750=77.56%, 1000=14.87% 00:36:35.709 lat (msec) : 2=0.02% 00:36:35.709 cpu : usr=0.76%, sys=3.06%, ctx=4226, majf=0, minf=2 00:36:35.709 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:35.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:35.709 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:35.709 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:35.709 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:35.709 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3640148: Mon Oct 7 09:57:35 2024 00:36:35.709 read: IOPS=25, BW=99.5KiB/s (102kB/s)(260KiB/2612msec) 00:36:35.709 slat (nsec): min=25078, max=35380, avg=25813.32, stdev=1434.54 00:36:35.709 clat (usec): min=632, max=42156, avg=39810.00, stdev=8626.60 00:36:35.709 lat (usec): min=662, max=42181, avg=39835.82, stdev=8625.38 00:36:35.709 clat percentiles (usec): 00:36:35.709 | 1.00th=[ 635], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:36:35.709 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:36:35.709 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:35.709 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:35.709 | 99.99th=[42206] 00:36:35.709 bw ( KiB/s): min= 88, max= 112, per=1.08%, avg=99.20, stdev= 9.12, samples=5 00:36:35.709 iops : min= 22, max= 28, avg=24.80, stdev= 2.28, samples=5 00:36:35.709 lat (usec) : 750=1.52%, 1000=1.52% 00:36:35.709 lat (msec) : 2=1.52%, 50=93.94% 00:36:35.709 cpu : usr=0.11%, sys=0.00%, ctx=66, majf=0, minf=2 00:36:35.709 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:35.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:35.709 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:35.709 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:35.709 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:35.709 00:36:35.709 Run status group 0 (all jobs): 00:36:35.709 READ: bw=9135KiB/s (9354kB/s), 95.3KiB/s-6072KiB/s (97.6kB/s-6218kB/s), io=28.1MiB (29.5MB), run=2612-3151msec 00:36:35.709 00:36:35.709 Disk stats (read/write): 00:36:35.709 nvme0n1: ios=68/0, merge=0/0, ticks=2803/0, in_queue=2803, util=93.82% 00:36:35.709 nvme0n2: ios=2835/0, merge=0/0, ticks=2891/0, in_queue=2891, util=94.30% 00:36:35.709 nvme0n3: ios=4022/0, merge=0/0, ticks=2442/0, in_queue=2442, util=96.03% 00:36:35.709 nvme0n4: ios=64/0, merge=0/0, ticks=2549/0, in_queue=2549, util=96.42% 00:36:35.970 09:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:35.970 09:57:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:36:36.230 09:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:36.230 09:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:36:36.230 09:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:36.230 09:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:36:36.490 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:36.490 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:36:36.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:36:36.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3639948 00:36:36.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:36:36.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:36.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:36.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:36.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # local i=0 00:36:36.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -o NAME,SERIAL 00:36:36.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:36.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME,SERIAL 00:36:36.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1230 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:36.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1234 -- # return 0 00:36:36.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:36:36.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:36:36.750 nvmf hotplug test: fio failed as expected 00:36:36.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:37.010 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:36:37.010 
09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:36:37.010 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:36:37.010 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:36:37.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:36:37.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:37.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:36:37.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:37.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:36:37.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:37.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:37.011 rmmod nvme_tcp 00:36:37.011 rmmod nvme_fabrics 00:36:37.011 rmmod nvme_keyring 00:36:37.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:37.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:36:37.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:36:37.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 3636538 ']' 00:36:37.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 3636538 00:36:37.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' -z 3636538 ']' 00:36:37.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # kill -0 3636538 00:36:37.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # uname 00:36:37.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:36:37.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3636538 00:36:37.272 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:36:37.272 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:36:37.272 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3636538' 00:36:37.272 killing process with pid 3636538 00:36:37.272 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # kill 3636538 00:36:37.272 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@977 -- # wait 3636538 00:36:37.272 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:37.272 09:57:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:37.272 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:37.272 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:36:37.272 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:36:37.272 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:37.272 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:36:37.272 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:37.272 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:37.272 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:37.272 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:37.272 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:39.818 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:39.818 00:36:39.818 real 0m28.560s 00:36:39.818 user 2m18.721s 00:36:39.818 sys 0m12.226s 00:36:39.818 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # xtrace_disable 00:36:39.818 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:39.818 ************************************ 00:36:39.818 END TEST nvmf_fio_target 00:36:39.818 ************************************ 00:36:39.818 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:39.818 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:36:39.818 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1110 -- # xtrace_disable 00:36:39.818 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:39.818 ************************************ 00:36:39.818 START TEST nvmf_bdevio 00:36:39.818 ************************************ 00:36:39.818 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:39.818 * Looking for test storage... 
00:36:39.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1626 -- # lcov --version 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:36:39.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.818 --rc genhtml_branch_coverage=1 00:36:39.818 --rc genhtml_function_coverage=1 00:36:39.818 --rc genhtml_legend=1 00:36:39.818 --rc geninfo_all_blocks=1 00:36:39.818 --rc geninfo_unexecuted_blocks=1 00:36:39.818 00:36:39.818 ' 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:36:39.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.818 --rc genhtml_branch_coverage=1 00:36:39.818 --rc genhtml_function_coverage=1 00:36:39.818 --rc genhtml_legend=1 00:36:39.818 --rc geninfo_all_blocks=1 00:36:39.818 --rc geninfo_unexecuted_blocks=1 00:36:39.818 00:36:39.818 ' 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:36:39.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.818 --rc genhtml_branch_coverage=1 00:36:39.818 --rc genhtml_function_coverage=1 00:36:39.818 --rc genhtml_legend=1 00:36:39.818 --rc geninfo_all_blocks=1 00:36:39.818 --rc geninfo_unexecuted_blocks=1 00:36:39.818 00:36:39.818 ' 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:36:39.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.818 --rc genhtml_branch_coverage=1 00:36:39.818 --rc genhtml_function_coverage=1 00:36:39.818 --rc genhtml_legend=1 00:36:39.818 --rc geninfo_all_blocks=1 00:36:39.818 --rc geninfo_unexecuted_blocks=1 00:36:39.818 00:36:39.818 ' 00:36:39.818 09:57:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.818 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:36:39.819 09:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:48.002 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:48.003 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:48.003 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:48.003 09:57:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:48.003 Found net devices under 0000:31:00.0: cvl_0_0 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:48.003 Found net devices under 0000:31:00.1: cvl_0_1 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:48.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:48.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:36:48.003 00:36:48.003 --- 10.0.0.2 ping statistics --- 00:36:48.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:48.003 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:48.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:48.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:36:48.003 00:36:48.003 --- 10.0.0.1 ping statistics --- 00:36:48.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:48.003 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=3645239 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 3645239 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@834 -- # '[' -z 3645239 ']' 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local max_retries=100 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:48.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@843 -- # xtrace_disable 00:36:48.003 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:48.003 [2024-10-07 09:57:47.002843] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
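For reference, the nvmf_tcp_init sequence traced above reduces to the following topology setup, a consolidated sketch using only the interface names and addresses shown in the trace (cvl_0_0 and cvl_0_1 are the two E810 ports discovered earlier in this log):
  ip netns add cvl_0_0_ns_spdk                                        # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port; the trace tags the rule with an SPDK_NVMF comment
The two pings above verify each direction, after which nvmf_tgt is launched inside the namespace via ip netns exec cvl_0_0_ns_spdk with --interrupt-mode -m 0x78, as traced below.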
00:36:48.003 [2024-10-07 09:57:47.004033] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:36:48.003 [2024-10-07 09:57:47.004084] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:48.003 [2024-10-07 09:57:47.094147] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:48.004 [2024-10-07 09:57:47.183359] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:48.004 [2024-10-07 09:57:47.183413] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:48.004 [2024-10-07 09:57:47.183422] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:48.004 [2024-10-07 09:57:47.183429] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:48.004 [2024-10-07 09:57:47.183436] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:48.004 [2024-10-07 09:57:47.185689] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:36:48.004 [2024-10-07 09:57:47.185894] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:36:48.004 [2024-10-07 09:57:47.186025] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:36:48.004 [2024-10-07 09:57:47.186026] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:36:48.004 [2024-10-07 09:57:47.281889] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:48.004 [2024-10-07 09:57:47.282652] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:48.004 [2024-10-07 09:57:47.283038] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:48.004 [2024-10-07 09:57:47.283748] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:48.004 [2024-10-07 09:57:47.283786] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
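The rpc_cmd calls that follow configure the target for bdevio. Consolidated into direct scripts/rpc.py invocations (rpc_cmd is the autotest wrapper that forwards these to the target's RPC socket), with every value copied from the trace, the setup is:
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport; -o and -u 8192 copied from the trace (-u is the io-unit-size)
  $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev with 512-byte blocks, per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, set the serial
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
bdevio then connects as an initiator using the JSON emitted by gen_nvmf_target_json, which appears in full a little further down in the trace.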
00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@867 -- # return 0 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@733 -- # xtrace_disable 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@564 -- # xtrace_disable 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:48.265 [2024-10-07 09:57:47.855000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@564 -- # xtrace_disable 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:48.265 Malloc0 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@564 -- # xtrace_disable 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@564 -- # xtrace_disable 00:36:48.265 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:48.526 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:36:48.526 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:48.526 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@564 -- # xtrace_disable 00:36:48.526 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:48.526 [2024-10-07 
09:57:47.939173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:48.526 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:36:48.526 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:36:48.526 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:36:48.526 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:36:48.526 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:36:48.526 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:48.526 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:48.526 { 00:36:48.526 "params": { 00:36:48.526 "name": "Nvme$subsystem", 00:36:48.526 "trtype": "$TEST_TRANSPORT", 00:36:48.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:48.526 "adrfam": "ipv4", 00:36:48.526 "trsvcid": "$NVMF_PORT", 00:36:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:48.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:48.526 "hdgst": ${hdgst:-false}, 00:36:48.526 "ddgst": ${ddgst:-false} 00:36:48.526 }, 00:36:48.526 "method": "bdev_nvme_attach_controller" 00:36:48.526 } 00:36:48.526 EOF 00:36:48.526 )") 00:36:48.526 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:36:48.526 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:36:48.526 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:36:48.526 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:48.526 "params": { 00:36:48.526 "name": "Nvme1", 00:36:48.526 "trtype": "tcp", 00:36:48.526 "traddr": "10.0.0.2", 00:36:48.526 "adrfam": "ipv4", 00:36:48.526 "trsvcid": "4420", 00:36:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:48.526 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:48.526 "hdgst": false, 00:36:48.526 "ddgst": false 00:36:48.526 }, 00:36:48.526 "method": "bdev_nvme_attach_controller" 00:36:48.526 }' 00:36:48.526 [2024-10-07 09:57:48.006270] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:36:48.526 [2024-10-07 09:57:48.006333] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645493 ] 00:36:48.526 [2024-10-07 09:57:48.088144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:48.526 [2024-10-07 09:57:48.187544] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:48.526 [2024-10-07 09:57:48.187687] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:48.526 [2024-10-07 09:57:48.187688] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:36:48.787 I/O targets: 00:36:48.787 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:36:48.787 00:36:48.787 00:36:48.787 CUnit - A unit testing framework for C - Version 2.1-3 00:36:48.787 http://cunit.sourceforge.net/ 00:36:48.787 00:36:48.787 00:36:48.787 Suite: bdevio tests on: Nvme1n1 00:36:48.787 Test: blockdev write read block ...passed 00:36:48.787 Test: blockdev write zeroes read block ...passed 00:36:48.787 Test: blockdev write zeroes read no split ...passed 00:36:49.048 Test: blockdev write zeroes read split ...passed 00:36:49.048 Test: blockdev write zeroes read split partial ...passed 00:36:49.048 Test: blockdev reset ...[2024-10-07 09:57:48.473190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:49.048 [2024-10-07 09:57:48.473284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145a1c0 (9): Bad file descriptor 00:36:49.048 [2024-10-07 09:57:48.526736] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:36:49.048 passed 00:36:49.048 Test: blockdev write read 8 blocks ...passed 00:36:49.048 Test: blockdev write read size > 128k ...passed 00:36:49.048 Test: blockdev write read invalid size ...passed 00:36:49.048 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:49.048 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:49.048 Test: blockdev write read max offset ...passed 00:36:49.048 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:49.308 Test: blockdev writev readv 8 blocks ...passed 00:36:49.308 Test: blockdev writev readv 30 x 1block ...passed 00:36:49.308 Test: blockdev writev readv block ...passed 00:36:49.308 Test: blockdev writev readv size > 128k ...passed 00:36:49.308 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:49.308 Test: blockdev comparev and writev ...[2024-10-07 09:57:48.831714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:49.309 [2024-10-07 09:57:48.831762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.309 [2024-10-07 09:57:48.831778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:49.309 [2024-10-07 09:57:48.831788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:49.309 [2024-10-07 09:57:48.832321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:49.309 [2024-10-07 09:57:48.832333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:49.309 [2024-10-07 09:57:48.832347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:49.309 [2024-10-07 09:57:48.832356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:49.309 [2024-10-07 09:57:48.832886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:49.309 [2024-10-07 09:57:48.832898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:49.309 [2024-10-07 09:57:48.832912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:49.309 [2024-10-07 09:57:48.832921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:49.309 [2024-10-07 09:57:48.833452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:49.309 [2024-10-07 09:57:48.833463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:49.309 [2024-10-07 09:57:48.833477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:49.309 [2024-10-07 09:57:48.833486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:36:49.309 passed
00:36:49.309 Test: blockdev nvme passthru rw ...passed
00:36:49.309 Test: blockdev nvme passthru vendor specific ...[2024-10-07 09:57:48.918290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:36:49.309 [2024-10-07 09:57:48.918307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:36:49.309 [2024-10-07 09:57:48.918544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:36:49.309 [2024-10-07 09:57:48.918555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:36:49.309 [2024-10-07 09:57:48.918830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:36:49.309 [2024-10-07 09:57:48.918841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:36:49.309 [2024-10-07 09:57:48.919080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:36:49.309 [2024-10-07 09:57:48.919091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:36:49.309 passed
00:36:49.309 Test: blockdev nvme admin passthru ...passed
00:36:49.570 Test: blockdev copy ...passed
00:36:49.570
00:36:49.570 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:36:49.570               suites      1      1    n/a      0        0
00:36:49.570                tests     23     23     23      0        0
00:36:49.570              asserts    152    152    152      0      n/a
00:36:49.570
00:36:49.570 Elapsed time = 1.265 seconds
00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@564 -- # xtrace_disable
00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup
00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:49.570 rmmod nvme_tcp
00:36:49.570 rmmod nvme_fabrics
00:36:49.570 rmmod nvme_keyring
00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
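The nvmftestfini teardown traced here (and continuing below) unloads the kernel NVMe modules inside a bounded retry loop, since a just-torn-down connection can briefly keep nvme-fabrics busy. A minimal sketch of that pattern, assuming a one-second pause between attempts; this run succeeds on the first pass, so no retry or pause appears in the trace:

    # hedged sketch of the unload-with-retry pattern traced around this point;
    # the sleep between attempts is an assumption -- the first pass succeeds
    # here, so the trace never reaches a retry
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp               # drop the transport module first
        modprobe -v -r nvme-fabrics && break  # fabrics only unloads once unused
        sleep 1
    done
    set -e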
00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 3645239 ']' 00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 3645239 00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' -z 3645239 ']' 00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # kill -0 3645239 00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # uname 00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:36:49.570 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3645239 00:36:49.831 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # process_name=reactor_3 00:36:49.832 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@963 -- # '[' reactor_3 = sudo ']' 00:36:49.832 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3645239' 00:36:49.832 killing process with pid 3645239 00:36:49.832 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # kill 3645239 00:36:49.832 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@977 -- # wait 3645239 00:36:50.093 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:50.093 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:50.093 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:50.093 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:36:50.093 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:36:50.093 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:50.093 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:36:50.093 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:50.093 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:50.093 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:50.093 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:50.093 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:52.008 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:52.008 00:36:52.008 real 0m12.623s 00:36:52.008 user 
0m10.238s 00:36:52.008 sys 0m6.572s 00:36:52.008 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # xtrace_disable 00:36:52.008 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:52.008 ************************************ 00:36:52.008 END TEST nvmf_bdevio 00:36:52.008 ************************************ 00:36:52.008 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:36:52.008 00:36:52.008 real 5m6.468s 00:36:52.008 user 10m19.959s 00:36:52.008 sys 2m7.824s 00:36:52.008 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # xtrace_disable 00:36:52.008 09:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:52.008 ************************************ 00:36:52.008 END TEST nvmf_target_core_interrupt_mode 00:36:52.008 ************************************ 00:36:52.270 09:57:51 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:52.270 09:57:51 nvmf_tcp -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:36:52.270 09:57:51 nvmf_tcp -- common/autotest_common.sh@1110 -- # xtrace_disable 00:36:52.270 09:57:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:52.270 ************************************ 00:36:52.270 START TEST nvmf_interrupt 00:36:52.270 ************************************ 00:36:52.270 09:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:52.270 * Looking for test storage... 
00:36:52.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:52.270 09:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:36:52.270 09:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1626 -- # lcov --version 00:36:52.270 09:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:36:52.532 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:36:52.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.533 --rc genhtml_branch_coverage=1 00:36:52.533 --rc genhtml_function_coverage=1 00:36:52.533 --rc genhtml_legend=1 00:36:52.533 --rc geninfo_all_blocks=1 00:36:52.533 --rc geninfo_unexecuted_blocks=1 00:36:52.533 00:36:52.533 ' 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:36:52.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.533 --rc genhtml_branch_coverage=1 00:36:52.533 --rc genhtml_function_coverage=1 00:36:52.533 --rc genhtml_legend=1 00:36:52.533 --rc geninfo_all_blocks=1 00:36:52.533 --rc geninfo_unexecuted_blocks=1 00:36:52.533 00:36:52.533 ' 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:36:52.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.533 --rc genhtml_branch_coverage=1 00:36:52.533 --rc genhtml_function_coverage=1 00:36:52.533 --rc genhtml_legend=1 00:36:52.533 --rc geninfo_all_blocks=1 00:36:52.533 --rc geninfo_unexecuted_blocks=1 00:36:52.533 00:36:52.533 ' 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:36:52.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.533 --rc genhtml_branch_coverage=1 00:36:52.533 --rc genhtml_function_coverage=1 00:36:52.533 --rc genhtml_legend=1 00:36:52.533 --rc geninfo_all_blocks=1 00:36:52.533 --rc geninfo_unexecuted_blocks=1 00:36:52.533 00:36:52.533 ' 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:52.533 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:52.534 09:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:52.534 09:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:36:52.534 09:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:52.534 09:57:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:52.534 09:57:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:52.534 09:57:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:36:52.534 09:57:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:00.683 09:57:59 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:00.683 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:00.684 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:00.684 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:00.684 Found net devices under 0000:31:00.0: cvl_0_0 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:00.684 
09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:00.684 Found net devices under 0000:31:00.1: cvl_0_1 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:00.684 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:37:00.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:37:00.684 00:37:00.684 --- 10.0.0.2 ping statistics --- 00:37:00.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:00.684 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:00.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:00.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:37:00.684 00:37:00.684 --- 10.0.0.1 ping statistics --- 00:37:00.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:00.684 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=3650016 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 3650016 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@834 -- # '[' -z 3650016 ']' 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local max_retries=100 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:00.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@843 -- # xtrace_disable 00:37:00.684 09:57:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:00.684 [2024-10-07 09:57:59.786291] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:00.684 [2024-10-07 09:57:59.787426] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
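The two successful pings above complete the namespace-based test topology: the target-side port (cvl_0_0, 10.0.0.2) runs inside the cvl_0_0_ns_spdk namespace where nvmf_tgt is now being launched, while the initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace. A consolidated restatement of the setup commands just traced:

    # restatement of the namespace topology configured in the trace above:
    # target port in its own netns with 10.0.0.2, initiator port in the
    # root netns with 10.0.0.1, NVMe/TCP allowed through the firewall
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns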
00:37:00.684 [2024-10-07 09:57:59.787478] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:00.684 [2024-10-07 09:57:59.876685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:00.684 [2024-10-07 09:57:59.970697] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:00.684 [2024-10-07 09:57:59.970755] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:00.684 [2024-10-07 09:57:59.970764] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:00.684 [2024-10-07 09:57:59.970771] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:00.684 [2024-10-07 09:57:59.970777] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:00.684 [2024-10-07 09:57:59.971892] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:00.684 [2024-10-07 09:57:59.971998] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:00.684 [2024-10-07 09:58:00.050646] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:00.684 [2024-10-07 09:58:00.051488] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:00.684 [2024-10-07 09:58:00.051709] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:00.946 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:37:00.946 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@867 -- # return 0 00:37:00.946 09:58:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:00.946 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@733 -- # xtrace_disable 00:37:00.946 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:01.208 09:58:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:01.208 09:58:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:37:01.208 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:37:01.208 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:37:01.208 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:37:01.208 5000+0 records in 00:37:01.208 5000+0 records out 00:37:01.209 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0182917 s, 560 MB/s 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@564 -- # xtrace_disable 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:01.209 AIO0 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@564 -- # xtrace_disable 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:01.209 [2024-10-07 09:58:00.740986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@564 -- # xtrace_disable 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@564 -- # xtrace_disable 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@564 -- # xtrace_disable 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:01.209 [2024-10-07 09:58:00.797666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3650016 0 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3650016 0 idle 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3650016 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3650016 -w 256 00:37:01.209 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3650016 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.34 reactor_0' 
00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3650016 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.34 reactor_0 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3650016 1 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3650016 1 idle 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3650016 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3650016 -w 256 00:37:01.471 09:58:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3650020 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3650020 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # 
perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3650385 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3650016 0 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3650016 0 busy 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3650016 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3650016 -w 256 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3650016 root 20 0 128.2g 44928 32256 R 66.7 0.0 0:00.45 reactor_0' 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3650016 root 20 0 128.2g 44928 32256 R 66.7 0.0 0:00.45 reactor_0 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=66.7 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=66 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3650016 1 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3650016 1 busy 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3650016 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 
-- # local idx=1 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3650016 -w 256 00:37:01.733 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:01.995 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3650020 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.25 reactor_1' 00:37:01.995 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3650020 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.25 reactor_1 00:37:01.995 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:01.995 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:01.995 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:37:01.995 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:37:01.995 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:01.995 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:01.995 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:37:01.995 09:58:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:01.995 09:58:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3650385 00:37:12.002 Initializing NVMe Controllers 00:37:12.002 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:12.002 Controller IO queue size 256, less than required. 00:37:12.002 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:12.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:12.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:12.002 Initialization complete. Launching workers. 
00:37:12.002 ======================================================== 00:37:12.002 Latency(us) 00:37:12.002 Device Information : IOPS MiB/s Average min max 00:37:12.002 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18268.70 71.36 14017.93 4487.41 32369.63 00:37:12.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19136.90 74.75 13379.04 8066.80 28990.72 00:37:12.003 ======================================================== 00:37:12.003 Total : 37405.60 146.12 13691.07 4487.41 32369.63 00:37:12.003 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3650016 0 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3650016 0 idle 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3650016 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3650016 -w 256 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3650016 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.32 reactor_0' 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3650016 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.32 reactor_0 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3650016 1 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3650016 1 idle 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3650016 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3650016 -w 256 00:37:12.003 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:12.265 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3650020 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:37:12.265 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3650020 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:37:12.266 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:12.266 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:12.266 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:12.266 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:12.266 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:12.266 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:12.266 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:12.266 09:58:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:12.266 09:58:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:12.838 09:58:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:37:12.838 09:58:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local i=0 00:37:12.838 09:58:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local nvme_device_counter=1 nvme_devices=0 00:37:12.838 09:58:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # [[ -n '' ]] 00:37:12.838 09:58:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # sleep 2 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # (( i++ <= 15 )) 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # lsblk -l -o NAME,SERIAL 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # grep -c SPDKISFASTANDAWESOME 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # nvme_devices=1 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # (( nvme_devices == nvme_device_counter )) 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # return 0 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- 
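The waitforserial loop traced just above is easy to extract: after nvme connect, it re-lists block devices every two seconds and counts rows whose SERIAL column matches, giving up after 15 tries. A minimal standalone sketch of that loop, reconstructed from the traced commands (the retry count, sleep interval, and variable names come from the trace; the real autotest_common.sh helper carries extra options not shown in this log):

waitforserial() {
    # Wait until $2 (default 1) block devices with NVMe serial $1 appear.
    local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
    while (( i++ <= 15 )); do
        sleep 2
        # grep -c prints the match count even when it is zero
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME   # the serial exposed by the connect above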
target/interrupt.sh@52 -- # for i in {0..1} 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3650016 0 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3650016 0 idle 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3650016 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:14.755 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:15.016 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3650016 -w 256 00:37:15.016 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:15.016 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3650016 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.71 reactor_0' 00:37:15.016 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3650016 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.71 reactor_0 00:37:15.016 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:15.016 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:15.016 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3650016 1 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3650016 1 idle 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3650016 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
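Every one of these checks is the same probe: take one batch sample from top restricted to the target pid, pull out the reactor_N thread row with grep, and compare column 9 (%CPU) against a threshold. A condensed sketch of that probe (the top/sed/awk pipeline is exactly what the trace runs; the thresholds differ between the busy and idle call sites above, so they are a parameter here, and the function name is changed to avoid claiming this is the full interrupt/common.sh helper, which also retries up to 10 times via its j counter):

# Usage: reactor_probe <pid> <reactor-idx> busy|idle <threshold>
reactor_probe() {
    local pid=$1 idx=$2 state=$3 threshold=$4
    hash top || return 1                       # same availability check as the trace

    # -b batch mode, -H show threads, -n 1 single iteration, -w 256 wide output
    local top_reactor cpu_rate
    top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx") || return 1

    # strip leading blanks, take top's %CPU column, truncate to an integer
    cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%%.*}

    if [[ $state == busy ]]; then
        (( cpu_rate >= threshold ))            # busy: CPU at or above threshold
    else
        (( cpu_rate <= threshold ))            # idle: CPU at or below threshold
    fi
}

reactor_probe 3650016 1 idle 30   # mirrors the reactor_1 idle check above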
00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3650016 -w 256 00:37:15.017 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:15.278 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3650020 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:37:15.278 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3650020 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:37:15.278 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:15.278 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:15.278 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:15.278 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:15.278 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:15.278 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:15.278 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:15.278 09:58:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:15.278 09:58:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:15.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:15.540 09:58:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:15.540 09:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # local i=0 00:37:15.540 09:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # lsblk -o NAME,SERIAL 00:37:15.540 09:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:15.540 09:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME,SERIAL 00:37:15.540 09:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1230 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:15.540 09:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1234 -- # return 0 00:37:15.540 09:58:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:37:15.540 09:58:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:37:15.540 09:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:15.540 09:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:37:15.540 09:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:15.540 09:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:37:15.540 09:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:15.540 09:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:15.540 rmmod nvme_tcp 00:37:15.540 rmmod nvme_fabrics 00:37:15.540 rmmod nvme_keyring 00:37:15.540 09:58:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:15.540 09:58:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:37:15.540 09:58:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:37:15.540 09:58:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 
3650016 ']' 00:37:15.540 09:58:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 3650016 00:37:15.540 09:58:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@953 -- # '[' -z 3650016 ']' 00:37:15.540 09:58:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # kill -0 3650016 00:37:15.540 09:58:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # uname 00:37:15.540 09:58:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:37:15.540 09:58:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3650016 00:37:15.540 09:58:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:37:15.540 09:58:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:37:15.540 09:58:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3650016' 00:37:15.540 killing process with pid 3650016 00:37:15.540 09:58:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # kill 3650016 00:37:15.540 09:58:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@977 -- # wait 3650016 00:37:15.802 09:58:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:15.802 09:58:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:15.802 09:58:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:15.802 09:58:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:37:15.802 09:58:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:37:15.802 09:58:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:15.802 09:58:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:37:15.802 09:58:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:15.802 09:58:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:15.802 09:58:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:15.802 09:58:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:15.802 09:58:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:17.719 09:58:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:17.980 00:37:17.980 real 0m25.658s 00:37:17.980 user 0m40.349s 00:37:17.980 sys 0m10.013s 00:37:17.980 09:58:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # xtrace_disable 00:37:17.980 09:58:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:17.980 ************************************ 00:37:17.980 END TEST nvmf_interrupt 00:37:17.980 ************************************ 00:37:17.980 00:37:17.980 real 30m14.146s 00:37:17.980 user 61m19.686s 00:37:17.980 sys 10m23.411s 00:37:17.980 09:58:17 nvmf_tcp -- common/autotest_common.sh@1129 -- # xtrace_disable 00:37:17.980 09:58:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:17.980 ************************************ 00:37:17.980 END TEST nvmf_tcp 00:37:17.980 ************************************ 00:37:17.980 09:58:17 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:37:17.980 09:58:17 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:17.980 09:58:17 -- 
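The teardown above goes through the killprocess helper, whose traced steps are: verify a pid was supplied, confirm the process is alive with kill -0, read its comm name via ps so a bare sudo is never signalled, then kill and wait. A sketch reconstructed from those steps (the "not found" message is copied from the spdkcli teardown later in this log; the trace shows a branch for the sudo case whose body is never exercised here, so it is left out):

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1

    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi

    if [[ $(uname) == Linux ]]; then
        # SPDK app threads report names like reactor_0; refuse to kill sudo itself
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name != sudo ]] || return 1
    fi

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true   # reap it when it is our child, as in these tests
}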
common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:37:17.980 09:58:17 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:37:17.980 09:58:17 -- common/autotest_common.sh@10 -- # set +x 00:37:17.980 ************************************ 00:37:17.980 START TEST spdkcli_nvmf_tcp 00:37:17.980 ************************************ 00:37:17.980 09:58:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:17.981 * Looking for test storage... 00:37:17.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1626 -- # lcov --version 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:37:18.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:18.243 --rc genhtml_branch_coverage=1 00:37:18.243 --rc genhtml_function_coverage=1 00:37:18.243 --rc genhtml_legend=1 00:37:18.243 --rc geninfo_all_blocks=1 00:37:18.243 --rc geninfo_unexecuted_blocks=1 00:37:18.243 00:37:18.243 ' 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:37:18.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:18.243 --rc genhtml_branch_coverage=1 00:37:18.243 --rc genhtml_function_coverage=1 00:37:18.243 --rc genhtml_legend=1 00:37:18.243 --rc geninfo_all_blocks=1 00:37:18.243 --rc geninfo_unexecuted_blocks=1 00:37:18.243 00:37:18.243 ' 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:37:18.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:18.243 --rc genhtml_branch_coverage=1 00:37:18.243 --rc genhtml_function_coverage=1 00:37:18.243 --rc genhtml_legend=1 00:37:18.243 --rc geninfo_all_blocks=1 00:37:18.243 --rc geninfo_unexecuted_blocks=1 00:37:18.243 00:37:18.243 ' 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:37:18.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:18.243 --rc genhtml_branch_coverage=1 00:37:18.243 --rc genhtml_function_coverage=1 00:37:18.243 --rc genhtml_legend=1 00:37:18.243 --rc geninfo_all_blocks=1 00:37:18.243 --rc geninfo_unexecuted_blocks=1 00:37:18.243 00:37:18.243 ' 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:18.243 
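The lcov probe above exercises the lt → cmp_versions chain from scripts/common.sh: both version strings are split on the separators ., - and :, each component is validated as a decimal, and components are compared numerically left to right. A compact equivalent of that comparison (the splitting rule and digit check are taken from the trace; the traced cmp_versions also dispatches on other operators through its case "$op" block, omitted here):

# Succeeds when dotted version $1 sorts strictly before $2.
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"

    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}      # pad missing components with 0
        [[ $a =~ ^[0-9]+$ ]] || a=0          # the traced decimal() accepts digits only
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                                 # equal versions are not "less than"
}

version_lt 1.15 2 && echo "old lcov"   # the branch this log takes for lcov 1.15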
09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:37:18.243 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:18.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3653566 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3653566 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # '[' -z 3653566 ']' 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local max_retries=100 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@841 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:18.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@843 -- # xtrace_disable 00:37:18.244 09:58:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:18.244 [2024-10-07 09:58:17.836136] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:37:18.244 [2024-10-07 09:58:17.836202] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3653566 ] 00:37:18.506 [2024-10-07 09:58:17.916016] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:18.506 [2024-10-07 09:58:18.012548] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.506 [2024-10-07 09:58:18.012553] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:19.080 09:58:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:37:19.080 09:58:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@867 -- # return 0 00:37:19.080 09:58:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:19.080 09:58:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@733 -- # xtrace_disable 00:37:19.080 09:58:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:19.080 09:58:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:19.080 09:58:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:19.080 09:58:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:19.080 09:58:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:19.080 09:58:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:19.080 09:58:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:19.080 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:19.080 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:19.080 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:19.080 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:19.080 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:19.080 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:19.080 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:19.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:19.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:19.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:19.080 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:19.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:19.080 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:19.080 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:19.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:19.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:19.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:19.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:19.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:19.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:19.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:19.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:19.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:19.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:19.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:19.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:19.080 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:19.080 ' 00:37:22.386 [2024-10-07 09:58:21.445118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:23.329 [2024-10-07 09:58:22.809257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:25.877 [2024-10-07 09:58:25.336364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:37:28.424 [2024-10-07 09:58:27.562676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:29.809 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:29.809 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:29.809 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:29.809 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:29.809 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:29.809 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:29.809 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:29.809 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:29.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:29.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:29.809 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:29.809 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:29.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:29.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:29.809 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:29.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:29.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:29.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:29.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:29.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:29.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:29.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:29.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:29.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:29.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:29.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:29.809 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:29.809 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:29.809 09:58:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:29.809 09:58:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@733 -- # xtrace_disable 00:37:29.809 09:58:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:29.809 09:58:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:29.809 09:58:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:29.809 09:58:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:29.809 09:58:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:29.809 09:58:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:30.381 09:58:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:30.381 09:58:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:30.381 09:58:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:30.381 09:58:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@733 -- # xtrace_disable 00:37:30.381 09:58:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:30.381 09:58:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:30.381 09:58:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:30.381 09:58:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:30.381 09:58:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:30.381 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:30.381 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:30.381 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:30.381 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:30.381 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:30.381 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:30.381 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:30.381 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:30.381 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:30.381 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:30.381 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:30.381 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:30.381 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:30.381 ' 00:37:36.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:36.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:36.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:36.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:36.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:36.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:36.973 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:36.973 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:36.973 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:36.973 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:36.973 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:37:36.973 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:36.973 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:36.973 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@733 -- # xtrace_disable 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3653566 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' -z 3653566 ']' 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # kill -0 3653566 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # uname 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3653566 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3653566' 00:37:36.973 killing process with pid 3653566 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # kill 3653566 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # wait 3653566 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3653566 ']' 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3653566 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' -z 3653566 ']' 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # kill -0 3653566 00:37:36.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 957: kill: (3653566) - No such process 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@980 -- # echo 'Process with pid 3653566 is not found' 00:37:36.973 Process with pid 3653566 is not found 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:36.973 00:37:36.973 real 0m18.243s 00:37:36.973 user 0m40.399s 00:37:36.973 sys 0m0.946s 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # xtrace_disable 00:37:36.973 09:58:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:36.973 ************************************ 00:37:36.973 END TEST spdkcli_nvmf_tcp 00:37:36.973 ************************************ 00:37:36.973 09:58:35 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:36.973 09:58:35 -- 
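Before this teardown, the spdkcli test validated the live configuration through the check_match sequence traced further up: dump the /nvmf tree with scripts/spdkcli.py ll, compare the dump against the stored *.test.match template with SPDK's test/app/match tool, and delete the dump afterwards. A sketch of that sequence ($rootdir stands in for the long workspace path; redirecting the ll output into the .test file is implied by the later rm rather than visible in the trace):

check_match() {
    local testfile=$rootdir/test/spdkcli/match_files/spdkcli_nvmf.test

    # capture the current spdkcli view of the nvmf subsystem tree
    "$rootdir/scripts/spdkcli.py" ll /nvmf > "$testfile"

    # match takes the <file>.match template, compares it against <file>,
    # and exits non-zero on divergence, which fails the test
    "$rootdir/test/app/match/match" "$testfile.match"

    rm -f "$testfile"
}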
common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:37:36.973 09:58:35 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:37:36.973 09:58:35 -- common/autotest_common.sh@10 -- # set +x 00:37:36.973 ************************************ 00:37:36.973 START TEST nvmf_identify_passthru 00:37:36.973 ************************************ 00:37:36.973 09:58:35 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:36.973 * Looking for test storage... 00:37:36.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:36.973 09:58:35 nvmf_identify_passthru -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:37:36.973 09:58:35 nvmf_identify_passthru -- common/autotest_common.sh@1626 -- # lcov --version 00:37:36.973 09:58:35 nvmf_identify_passthru -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:37:36.973 09:58:36 nvmf_identify_passthru -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:36.973 09:58:36 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:37:36.973 09:58:36 nvmf_identify_passthru -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:36.973 09:58:36 nvmf_identify_passthru -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:37:36.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.973 --rc genhtml_branch_coverage=1 00:37:36.973 --rc genhtml_function_coverage=1 00:37:36.973 --rc genhtml_legend=1 00:37:36.973 --rc geninfo_all_blocks=1 00:37:36.974 --rc geninfo_unexecuted_blocks=1 00:37:36.974 00:37:36.974 ' 00:37:36.974 09:58:36 nvmf_identify_passthru -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:37:36.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.974 --rc genhtml_branch_coverage=1 00:37:36.974 --rc genhtml_function_coverage=1 00:37:36.974 --rc genhtml_legend=1 00:37:36.974 --rc geninfo_all_blocks=1 00:37:36.974 --rc geninfo_unexecuted_blocks=1 00:37:36.974 00:37:36.974 ' 00:37:36.974 09:58:36 nvmf_identify_passthru -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:37:36.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.974 --rc genhtml_branch_coverage=1 00:37:36.974 --rc genhtml_function_coverage=1 00:37:36.974 --rc genhtml_legend=1 00:37:36.974 --rc geninfo_all_blocks=1 00:37:36.974 --rc geninfo_unexecuted_blocks=1 00:37:36.974 00:37:36.974 ' 00:37:36.974 09:58:36 nvmf_identify_passthru -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:37:36.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.974 --rc genhtml_branch_coverage=1 00:37:36.974 --rc genhtml_function_coverage=1 00:37:36.974 --rc genhtml_legend=1 00:37:36.974 --rc geninfo_all_blocks=1 00:37:36.974 --rc geninfo_unexecuted_blocks=1 00:37:36.974 00:37:36.974 ' 00:37:36.974 09:58:36 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:36.974 09:58:36 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob
00:37:36.974 09:58:36 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:37:36.974 09:58:36 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:36.974 09:58:36 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:36.974 09:58:36 nvmf_identify_passthru -- paths/export.sh@2-@6 [duplicate PATH export trace condensed: PATH is prepended again with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the standard system paths, then exported and echoed; the repeated full-PATH dumps are elided]
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:37:36.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:37:36.974 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0
00:37:36.974 09:58:36 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:36.974 09:58:36 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob
00:37:36.974 09:58:36 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:37:36.974 09:58:36 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:36.974 09:58:36 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:36.974 09:58:36 nvmf_identify_passthru -- paths/export.sh@2-@6 [second identical PATH export trace elided]
00:37:36.975 09:58:36 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit
00:37:36.975 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:37:36.975 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:37:36.975 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs
00:37:36.975 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no
00:37:36.975 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns
00:37:36.975 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:36.975 09:58:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:37:36.975 09:58:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:36.975 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:37:36.975 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:37:36.975 09:58:36 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable
00:37:36.975 09:58:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=()
00:37:43.657 09:58:43 nvmf_identify_passthru
-- nvmf/common.sh@315 -- # local -a pci_devs 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:43.657 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:43.657 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:43.657 Found net devices under 0000:31:00.0: cvl_0_0 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:43.657 Found net devices under 0000:31:00.1: cvl_0_1 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 
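The discovery loop traced above resolves each supported E810 PCI function to its kernel network interface by globbing sysfs. A minimal standalone sketch of that mapping, assuming the standard Linux sysfs layout; the pci_addrs array is an illustrative stand-in rather than a variable from nvmf/common.sh, while pci_net_devs and net_devs mirror names seen in the trace:

    # Map PCI addresses to net device names the way gather_supported_nvmf_pci_devs
    # does: a NIC bound to a net driver exposes /sys/bus/pci/devices/<addr>/net/<iface>.
    pci_addrs=("0000:31:00.0" "0000:31:00.1")   # example addresses from the trace
    net_devs=()
    for pci in "${pci_addrs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue     # glob did not match: no netdev bound
        pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

With the two cvl_0_0/cvl_0_1 interfaces found, the harness has more than one candidate, which is why the trace then splits them into a target interface (moved into a namespace) and an initiator interface.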
00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:43.657 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:43.658 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:43.658 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:43.658 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:43.658 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:43.658 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:43.658 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:43.658 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:43.658 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:43.658 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:43.658 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:43.658 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:43.919 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:43.919 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:43.919 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:43.919 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:43.919 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:43.919 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:43.919 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:43.919 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:43.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:43.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:37:43.919 00:37:43.919 --- 10.0.0.2 ping statistics --- 00:37:43.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:43.919 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:37:43.919 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:43.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:43.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:37:43.919 00:37:43.919 --- 10.0.0.1 ping statistics --- 00:37:43.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:43.919 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:37:43.919 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:43.919 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:37:43.919 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:43.919 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:43.919 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:43.919 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:43.919 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:43.919 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:43.919 09:58:43 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:43.919 09:58:43 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:43.919 09:58:43 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:43.919 09:58:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:44.180 09:58:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:44.180 09:58:43 nvmf_identify_passthru -- common/autotest_common.sh@1495 -- # bdfs=() 00:37:44.180 09:58:43 nvmf_identify_passthru -- common/autotest_common.sh@1495 -- # local bdfs 00:37:44.180 09:58:43 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=($(get_nvme_bdfs)) 00:37:44.180 09:58:43 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # get_nvme_bdfs 00:37:44.180 09:58:43 nvmf_identify_passthru -- common/autotest_common.sh@1484 -- # bdfs=() 00:37:44.180 09:58:43 nvmf_identify_passthru -- common/autotest_common.sh@1484 -- # local bdfs 00:37:44.180 09:58:43 nvmf_identify_passthru -- common/autotest_common.sh@1485 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:44.180 09:58:43 nvmf_identify_passthru -- common/autotest_common.sh@1485 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:44.180 09:58:43 nvmf_identify_passthru -- common/autotest_common.sh@1485 -- # jq -r '.config[].params.traddr' 00:37:44.180 09:58:43 nvmf_identify_passthru -- common/autotest_common.sh@1486 -- # (( 1 == 0 )) 00:37:44.180 09:58:43 nvmf_identify_passthru -- common/autotest_common.sh@1490 -- # printf '%s\n' 0000:65:00.0 00:37:44.180 09:58:43 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # echo 0000:65:00.0 00:37:44.180 09:58:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:37:44.180 09:58:43 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:37:44.180 09:58:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:44.180 09:58:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:44.180 09:58:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:44.752 09:58:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605499 00:37:44.752 09:58:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:44.752 09:58:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:44.752 09:58:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:45.013 09:58:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:37:45.013 09:58:44 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:45.013 09:58:44 nvmf_identify_passthru -- common/autotest_common.sh@733 -- # xtrace_disable 00:37:45.013 09:58:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.273 09:58:44 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:45.273 09:58:44 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:45.273 09:58:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.273 09:58:44 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3661056 00:37:45.273 09:58:44 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:45.273 09:58:44 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:45.273 09:58:44 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3661056 00:37:45.273 09:58:44 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # '[' -z 3661056 ']' 00:37:45.273 09:58:44 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:45.273 09:58:44 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local max_retries=100 00:37:45.273 09:58:44 nvmf_identify_passthru -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:45.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:45.273 09:58:44 nvmf_identify_passthru -- common/autotest_common.sh@843 -- # xtrace_disable 00:37:45.273 09:58:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.273 [2024-10-07 09:58:44.759209] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:37:45.273 [2024-10-07 09:58:44.759259] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:45.273 [2024-10-07 09:58:44.843847] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:45.273 [2024-10-07 09:58:44.915794] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:45.273 [2024-10-07 09:58:44.915840] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
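For reference, the launch-and-wait step traced just above follows a simple pattern: start nvmf_tgt in the background inside the target namespace, then poll its JSON-RPC socket until it answers. A rough sketch assuming the SPDK tree sits in ./spdk; rpc_get_methods is a standard SPDK RPC, but the poll interval and the hard-coded /var/tmp/spdk.sock path here are illustrative defaults rather than values read out of waitforlisten itself:

    # Start the target in the namespace and record its pid.
    ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # Poll the RPC socket; rpc_get_methods succeeds once the app is listening.
    until ./spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target already died
        sleep 0.5
    done

--wait-for-rpc keeps the app from initializing subsystems until framework_start_init is called, which is why the trace issues nvmf_set_config before that RPC.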
00:37:45.273 [2024-10-07 09:58:44.915848] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:45.273 [2024-10-07 09:58:44.915855] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:45.273 [2024-10-07 09:58:44.915862] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:45.273 [2024-10-07 09:58:44.917539] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:45.273 [2024-10-07 09:58:44.917691] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:37:45.273 [2024-10-07 09:58:44.917779] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:37:45.273 [2024-10-07 09:58:44.917779] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:46.216 09:58:45 nvmf_identify_passthru -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:37:46.216 09:58:45 nvmf_identify_passthru -- common/autotest_common.sh@867 -- # return 0 00:37:46.216 09:58:45 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:46.216 09:58:45 nvmf_identify_passthru -- common/autotest_common.sh@564 -- # xtrace_disable 00:37:46.216 09:58:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:46.216 INFO: Log level set to 20 00:37:46.216 INFO: Requests: 00:37:46.216 { 00:37:46.216 "jsonrpc": "2.0", 00:37:46.216 "method": "nvmf_set_config", 00:37:46.216 "id": 1, 00:37:46.216 "params": { 00:37:46.216 "admin_cmd_passthru": { 00:37:46.216 "identify_ctrlr": true 00:37:46.216 } 00:37:46.216 } 00:37:46.216 } 00:37:46.216 00:37:46.216 INFO: response: 00:37:46.216 { 00:37:46.216 "jsonrpc": "2.0", 00:37:46.216 "id": 1, 00:37:46.216 "result": true 00:37:46.216 } 00:37:46.216 00:37:46.216 09:58:45 nvmf_identify_passthru -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:37:46.216 09:58:45 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:46.216 09:58:45 nvmf_identify_passthru -- common/autotest_common.sh@564 -- # xtrace_disable 00:37:46.216 09:58:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:46.216 INFO: Setting log level to 20 00:37:46.216 INFO: Setting log level to 20 00:37:46.216 INFO: Log level set to 20 00:37:46.216 INFO: Log level set to 20 00:37:46.216 INFO: Requests: 00:37:46.216 { 00:37:46.216 "jsonrpc": "2.0", 00:37:46.216 "method": "framework_start_init", 00:37:46.216 "id": 1 00:37:46.216 } 00:37:46.216 00:37:46.216 INFO: Requests: 00:37:46.216 { 00:37:46.216 "jsonrpc": "2.0", 00:37:46.216 "method": "framework_start_init", 00:37:46.216 "id": 1 00:37:46.216 } 00:37:46.216 00:37:46.216 [2024-10-07 09:58:45.671782] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:46.216 INFO: response: 00:37:46.216 { 00:37:46.216 "jsonrpc": "2.0", 00:37:46.216 "id": 1, 00:37:46.216 "result": true 00:37:46.216 } 00:37:46.216 00:37:46.216 INFO: response: 00:37:46.216 { 00:37:46.216 "jsonrpc": "2.0", 00:37:46.216 "id": 1, 00:37:46.216 "result": true 00:37:46.216 } 00:37:46.216 00:37:46.216 09:58:45 nvmf_identify_passthru -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:37:46.216 09:58:45 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:46.216 09:58:45 nvmf_identify_passthru -- common/autotest_common.sh@564 -- # xtrace_disable 00:37:46.216 09:58:45 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:37:46.216 INFO: Setting log level to 40 00:37:46.216 INFO: Setting log level to 40 00:37:46.216 INFO: Setting log level to 40 00:37:46.216 [2024-10-07 09:58:45.685354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:46.216 09:58:45 nvmf_identify_passthru -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:37:46.216 09:58:45 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:46.216 09:58:45 nvmf_identify_passthru -- common/autotest_common.sh@733 -- # xtrace_disable 00:37:46.216 09:58:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:46.216 09:58:45 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:37:46.216 09:58:45 nvmf_identify_passthru -- common/autotest_common.sh@564 -- # xtrace_disable 00:37:46.216 09:58:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:46.478 Nvme0n1 00:37:46.478 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:37:46.478 09:58:46 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:46.478 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@564 -- # xtrace_disable 00:37:46.478 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:46.478 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:37:46.478 09:58:46 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:46.478 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@564 -- # xtrace_disable 00:37:46.478 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:46.478 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:37:46.478 09:58:46 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:46.478 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@564 -- # xtrace_disable 00:37:46.478 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:46.478 [2024-10-07 09:58:46.077561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:46.478 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:37:46.478 09:58:46 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:46.478 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@564 -- # xtrace_disable 00:37:46.478 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:46.478 [ 00:37:46.478 { 00:37:46.478 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:46.478 "subtype": "Discovery", 00:37:46.478 "listen_addresses": [], 00:37:46.478 "allow_any_host": true, 00:37:46.478 "hosts": [] 00:37:46.478 }, 00:37:46.478 { 00:37:46.478 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:46.478 "subtype": "NVMe", 00:37:46.478 "listen_addresses": [ 00:37:46.478 { 00:37:46.478 "trtype": "TCP", 00:37:46.478 "adrfam": "IPv4", 00:37:46.478 "traddr": "10.0.0.2", 00:37:46.478 "trsvcid": "4420" 00:37:46.478 } 00:37:46.478 ], 00:37:46.478 "allow_any_host": true, 00:37:46.478 "hosts": [], 00:37:46.478 "serial_number": 
"SPDK00000000000001", 00:37:46.478 "model_number": "SPDK bdev Controller", 00:37:46.478 "max_namespaces": 1, 00:37:46.478 "min_cntlid": 1, 00:37:46.478 "max_cntlid": 65519, 00:37:46.478 "namespaces": [ 00:37:46.478 { 00:37:46.478 "nsid": 1, 00:37:46.478 "bdev_name": "Nvme0n1", 00:37:46.478 "name": "Nvme0n1", 00:37:46.478 "nguid": "363447305260549900253845000000A3", 00:37:46.478 "uuid": "36344730-5260-5499-0025-3845000000a3" 00:37:46.478 } 00:37:46.478 ] 00:37:46.478 } 00:37:46.478 ] 00:37:46.478 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:37:46.478 09:58:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:46.478 09:58:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:46.478 09:58:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:46.740 09:58:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605499 00:37:46.740 09:58:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:46.740 09:58:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:46.740 09:58:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:47.001 09:58:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:37:47.001 09:58:46 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605499 '!=' S64GNE0R605499 ']' 00:37:47.001 09:58:46 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:37:47.001 09:58:46 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:47.001 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@564 -- # xtrace_disable 00:37:47.001 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:47.001 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:37:47.001 09:58:46 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:47.001 09:58:46 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:47.001 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:47.001 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:37:47.001 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:47.001 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:37:47.001 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:47.001 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:47.001 rmmod nvme_tcp 00:37:47.001 rmmod nvme_fabrics 00:37:47.001 rmmod nvme_keyring 00:37:47.001 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:47.001 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:37:47.001 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:37:47.001 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 
3661056 ']' 00:37:47.001 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 3661056 00:37:47.001 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' -z 3661056 ']' 00:37:47.001 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # kill -0 3661056 00:37:47.001 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # uname 00:37:47.001 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:37:47.001 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3661056 00:37:47.001 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:37:47.001 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:37:47.001 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3661056' 00:37:47.001 killing process with pid 3661056 00:37:47.001 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # kill 3661056 00:37:47.001 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@977 -- # wait 3661056 00:37:47.572 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:47.573 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:47.573 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:47.573 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:37:47.573 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:37:47.573 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:47.573 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:37:47.573 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:47.573 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:47.573 09:58:46 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.573 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:47.573 09:58:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:49.490 09:58:49 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:49.490 00:37:49.490 real 0m13.186s 00:37:49.490 user 0m10.369s 00:37:49.490 sys 0m6.578s 00:37:49.490 09:58:49 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # xtrace_disable 00:37:49.490 09:58:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:49.490 ************************************ 00:37:49.490 END TEST nvmf_identify_passthru 00:37:49.490 ************************************ 00:37:49.490 09:58:49 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:49.490 09:58:49 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:37:49.490 09:58:49 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:37:49.490 09:58:49 -- common/autotest_common.sh@10 -- # set +x 00:37:49.490 ************************************ 00:37:49.490 START TEST nvmf_dif 00:37:49.490 ************************************ 00:37:49.490 09:58:49 nvmf_dif -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:49.751 * Looking for test storage... 
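The nvmftestfini sequence in the identify_passthru teardown above unloads the nvme-tcp module stack, kills the target process, and strips only the firewall rules the test added. The rule filtering works because every rule was installed (via the ipts wrapper traced earlier) with an 'SPDK_NVMF:' comment tag; a compact sketch of both halves, taken directly from the commands in the trace:

    # Install a rule tagged with a comment so it can be identified again later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Teardown: drop every SPDK-tagged rule by filtering the saved ruleset.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

Filtering iptables-save output leaves any pre-existing rules untouched, so repeated test runs cannot leak ACCEPT rules into the host firewall.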
00:37:49.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:49.751 09:58:49 nvmf_dif -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:37:49.751 09:58:49 nvmf_dif -- common/autotest_common.sh@1626 -- # lcov --version 00:37:49.751 09:58:49 nvmf_dif -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:37:49.751 09:58:49 nvmf_dif -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:49.751 09:58:49 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:37:49.751 09:58:49 nvmf_dif -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:49.751 09:58:49 nvmf_dif -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:37:49.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.751 --rc genhtml_branch_coverage=1 00:37:49.751 --rc genhtml_function_coverage=1 00:37:49.751 --rc genhtml_legend=1 00:37:49.751 --rc geninfo_all_blocks=1 00:37:49.752 --rc geninfo_unexecuted_blocks=1 00:37:49.752 00:37:49.752 ' 00:37:49.752 09:58:49 nvmf_dif -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:37:49.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.752 --rc genhtml_branch_coverage=1 00:37:49.752 --rc genhtml_function_coverage=1 00:37:49.752 --rc genhtml_legend=1 00:37:49.752 --rc geninfo_all_blocks=1 00:37:49.752 --rc geninfo_unexecuted_blocks=1 00:37:49.752 00:37:49.752 ' 00:37:49.752 09:58:49 nvmf_dif -- common/autotest_common.sh@1640 -- # 
export 'LCOV=lcov
00:37:49.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:49.752 --rc genhtml_branch_coverage=1
00:37:49.752 --rc genhtml_function_coverage=1
00:37:49.752 --rc genhtml_legend=1
00:37:49.752 --rc geninfo_all_blocks=1
00:37:49.752 --rc geninfo_unexecuted_blocks=1
00:37:49.752
00:37:49.752 '
00:37:49.752 09:58:49 nvmf_dif -- common/autotest_common.sh@1640 -- # LCOV='lcov
00:37:49.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:49.752 --rc genhtml_branch_coverage=1
00:37:49.752 --rc genhtml_function_coverage=1
00:37:49.752 --rc genhtml_legend=1
00:37:49.752 --rc geninfo_all_blocks=1
00:37:49.752 --rc geninfo_unexecuted_blocks=1
00:37:49.752
00:37:49.752 '
00:37:49.752 09:58:49 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@7 -- # uname -s
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:49.752 09:58:49 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob
00:37:49.752 09:58:49 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:37:49.752 09:58:49 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:49.752 09:58:49 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:49.752 09:58:49 nvmf_dif -- paths/export.sh@2-@6 [repeated PATH export trace elided; identical in structure to the one condensed in the identify_passthru run above]
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@51 -- # : 0
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:37:49.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0
00:37:49.752 09:58:49 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16
00:37:49.752 09:58:49 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512
00:37:49.752 09:58:49 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64
00:37:49.752 09:58:49 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1
00:37:49.752 09:58:49 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns
00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:49.752 09:58:49 nvmf_dif
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:49.752 09:58:49 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:49.752 09:58:49 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:37:49.752 09:58:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:57.893 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:57.893 09:58:56 nvmf_dif -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:57.893 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:57.893 09:58:56 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:57.894 Found net devices under 0000:31:00.0: cvl_0_0 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:57.894 Found net devices under 0000:31:00.1: cvl_0_1 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:57.894 09:58:56 nvmf_dif -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:57.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:57.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:37:57.894 00:37:57.894 --- 10.0.0.2 ping statistics --- 00:37:57.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:57.894 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:57.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:57.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:37:57.894 00:37:57.894 --- 10.0.0.1 ping statistics --- 00:37:57.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:57.894 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:37:57.894 09:58:56 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:01.195 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:01.195 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:01.195 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:01.195 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:38:01.195 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:01.195 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:01.195 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:01.195 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:01.195 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:01.195 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:38:01.195 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:01.195 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:01.195 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:38:01.195 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:01.195 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:01.195 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:01.196 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:01.196 09:59:00 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:01.196 09:59:00 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:01.196 09:59:00 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:01.196 09:59:00 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:01.196 09:59:00 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:01.196 09:59:00 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:01.457 09:59:00 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:38:01.457 09:59:00 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:38:01.457 09:59:00 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:01.457 09:59:00 nvmf_dif -- common/autotest_common.sh@727 -- # xtrace_disable 00:38:01.457 09:59:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:01.457 09:59:00 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=3667228 00:38:01.457 09:59:00 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 3667228 00:38:01.457 09:59:00 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:01.457 09:59:00 nvmf_dif -- common/autotest_common.sh@834 -- # '[' -z 3667228 ']' 00:38:01.457 09:59:00 nvmf_dif -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:01.457 09:59:00 nvmf_dif -- common/autotest_common.sh@839 -- # local max_retries=100 00:38:01.457 09:59:00 nvmf_dif -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:38:01.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:01.457 09:59:00 nvmf_dif -- common/autotest_common.sh@843 -- # xtrace_disable 00:38:01.457 09:59:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:01.457 [2024-10-07 09:59:00.962722] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:38:01.457 [2024-10-07 09:59:00.962784] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:01.457 [2024-10-07 09:59:01.052961] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:01.717 [2024-10-07 09:59:01.147636] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:01.717 [2024-10-07 09:59:01.147693] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:01.717 [2024-10-07 09:59:01.147708] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:01.717 [2024-10-07 09:59:01.147716] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:01.717 [2024-10-07 09:59:01.147721] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:01.717 [2024-10-07 09:59:01.148505] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:38:02.288 09:59:01 nvmf_dif -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:38:02.288 09:59:01 nvmf_dif -- common/autotest_common.sh@867 -- # return 0 00:38:02.288 09:59:01 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:02.288 09:59:01 nvmf_dif -- common/autotest_common.sh@733 -- # xtrace_disable 00:38:02.288 09:59:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:02.288 09:59:01 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:02.288 09:59:01 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:38:02.288 09:59:01 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:38:02.288 09:59:01 nvmf_dif -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:02.288 09:59:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:02.288 [2024-10-07 09:59:01.818929] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:02.288 09:59:01 nvmf_dif -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:02.288 09:59:01 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:38:02.288 09:59:01 nvmf_dif -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:38:02.288 09:59:01 nvmf_dif -- common/autotest_common.sh@1110 -- # xtrace_disable 00:38:02.288 09:59:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:02.288 ************************************ 00:38:02.288 START TEST fio_dif_1_default 00:38:02.288 ************************************ 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # fio_dif_1 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:02.288 bdev_null0 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:02.288 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:02.288 [2024-10-07 09:59:01.907380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:02.289 { 00:38:02.289 "params": { 00:38:02.289 "name": "Nvme$subsystem", 00:38:02.289 "trtype": "$TEST_TRANSPORT", 00:38:02.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:02.289 "adrfam": "ipv4", 00:38:02.289 "trsvcid": "$NVMF_PORT", 00:38:02.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:02.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:02.289 "hdgst": ${hdgst:-false}, 00:38:02.289 "ddgst": ${ddgst:-false} 00:38:02.289 }, 00:38:02.289 "method": "bdev_nvme_attach_controller" 00:38:02.289 } 00:38:02.289 EOF 00:38:02.289 )") 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1325 -- # local fio_dir=/usr/src/fio 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1327 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1327 -- # local sanitizers 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1328 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1329 -- # shift 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1331 -- # local asan_lib= 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1332 -- # for sanitizer in "${sanitizers[@]}" 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # grep libasan 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # awk '{print $3}' 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
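# The trace above shows gen_nvmf_target_json assembling the --spdk_json_conf input
# that fio_plugin consumes: one heredoc fragment is appended to the config array per
# subsystem, the fragments are comma-joined via IFS=',', and jq pretty-prints the
# result (printed just below). A minimal stand-alone sketch of that flow — an
# approximation, not the actual nvmf/common.sh helper; gen_json_sketch and the
# 10.0.0.2/4420 fallbacks are illustrative:
gen_json_sketch() {
    local config=() sub
    for sub in "$@"; do
        config+=("$(printf '{ "method": "bdev_nvme_attach_controller", "params": { "name": "Nvme%s", "trtype": "tcp", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false } }' \
            "$sub" "${NVMF_FIRST_TARGET_IP:-10.0.0.2}" "${NVMF_PORT:-4420}" "$sub" "$sub")")
    done
    local IFS=,                             # comma-join the per-subsystem fragments
    printf '[%s]\n' "${config[*]}" | jq .   # jq validates and pretty-prints the result
}
# e.g.: gen_json_sketch 0 1 | jq -r '.[].params.subnqn'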
00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:38:02.289 09:59:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:02.289 "params": { 00:38:02.289 "name": "Nvme0", 00:38:02.289 "trtype": "tcp", 00:38:02.289 "traddr": "10.0.0.2", 00:38:02.289 "adrfam": "ipv4", 00:38:02.289 "trsvcid": "4420", 00:38:02.289 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:02.289 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:02.289 "hdgst": false, 00:38:02.289 "ddgst": false 00:38:02.289 }, 00:38:02.289 "method": "bdev_nvme_attach_controller" 00:38:02.289 }' 00:38:02.549 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # asan_lib= 00:38:02.549 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1334 -- # [[ -n '' ]] 00:38:02.549 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1332 -- # for sanitizer in "${sanitizers[@]}" 00:38:02.549 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:02.549 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # grep libclang_rt.asan 00:38:02.549 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # awk '{print $3}' 00:38:02.550 09:59:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # asan_lib= 00:38:02.550 09:59:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1334 -- # [[ -n '' ]] 00:38:02.550 09:59:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:02.550 09:59:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:02.810 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:02.811 fio-3.35 00:38:02.811 Starting 1 thread 00:38:15.047 00:38:15.047 filename0: (groupid=0, jobs=1): err= 0: pid=3667745: Mon Oct 7 09:59:12 2024 00:38:15.047 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10039msec) 00:38:15.047 slat (nsec): min=5462, max=34192, avg=6461.05, stdev=1794.69 00:38:15.047 clat (usec): min=40872, max=42987, avg=41123.57, stdev=367.69 00:38:15.047 lat (usec): min=40880, max=42996, avg=41130.03, stdev=368.48 00:38:15.047 clat percentiles (usec): 00:38:15.047 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:15.047 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:15.047 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:38:15.047 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:38:15.047 | 99.99th=[42730] 00:38:15.047 bw ( KiB/s): min= 352, max= 416, per=99.77%, avg=388.80, stdev=15.66, samples=20 00:38:15.047 iops : min= 88, max= 104, avg=97.20, stdev= 3.91, samples=20 00:38:15.047 lat (msec) : 50=100.00% 00:38:15.047 cpu : usr=93.03%, sys=6.74%, ctx=13, majf=0, minf=252 00:38:15.047 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:15.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:15.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:15.047 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:15.047 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:15.047 00:38:15.047 Run status group 0 (all jobs): 
00:38:15.047 READ: bw=389KiB/s (398kB/s), 389KiB/s-389KiB/s (398kB/s-398kB/s), io=3904KiB (3998kB), run=10039-10039msec 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:15.047 00:38:15.047 real 0m11.169s 00:38:15.047 user 0m19.524s 00:38:15.047 sys 0m1.056s 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # xtrace_disable 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:15.047 ************************************ 00:38:15.047 END TEST fio_dif_1_default 00:38:15.047 ************************************ 00:38:15.047 09:59:13 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:15.047 09:59:13 nvmf_dif -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:38:15.047 09:59:13 nvmf_dif -- common/autotest_common.sh@1110 -- # xtrace_disable 00:38:15.047 09:59:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:15.047 ************************************ 00:38:15.047 START TEST fio_dif_1_multi_subsystems 00:38:15.047 ************************************ 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # fio_dif_1_multi_subsystems 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.047 bdev_null0 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@592 -- # 
[[ 0 == 0 ]] 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.047 [2024-10-07 09:59:13.154613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.047 bdev_null1 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.047 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:15.048 { 00:38:15.048 "params": { 00:38:15.048 "name": "Nvme$subsystem", 00:38:15.048 "trtype": "$TEST_TRANSPORT", 00:38:15.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:15.048 "adrfam": "ipv4", 00:38:15.048 "trsvcid": "$NVMF_PORT", 00:38:15.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:15.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:15.048 "hdgst": ${hdgst:-false}, 00:38:15.048 "ddgst": ${ddgst:-false} 00:38:15.048 }, 00:38:15.048 "method": "bdev_nvme_attach_controller" 00:38:15.048 } 00:38:15.048 EOF 00:38:15.048 )") 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1325 -- # local fio_dir=/usr/src/fio 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1327 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1327 -- # local sanitizers 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1328 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1329 -- # shift 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1331 -- # local asan_lib= 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1332 -- # for sanitizer in "${sanitizers[@]}" 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file = 1 )) 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # grep libasan 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # awk '{print $3}' 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:15.048 { 00:38:15.048 "params": { 00:38:15.048 "name": "Nvme$subsystem", 00:38:15.048 "trtype": "$TEST_TRANSPORT", 00:38:15.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:15.048 "adrfam": "ipv4", 00:38:15.048 "trsvcid": "$NVMF_PORT", 00:38:15.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:15.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:15.048 "hdgst": ${hdgst:-false}, 00:38:15.048 "ddgst": ${ddgst:-false} 00:38:15.048 }, 00:38:15.048 "method": "bdev_nvme_attach_controller" 00:38:15.048 } 00:38:15.048 EOF 00:38:15.048 )") 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:15.048 "params": { 00:38:15.048 "name": "Nvme0", 00:38:15.048 "trtype": "tcp", 00:38:15.048 "traddr": "10.0.0.2", 00:38:15.048 "adrfam": "ipv4", 00:38:15.048 "trsvcid": "4420", 00:38:15.048 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:15.048 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:15.048 "hdgst": false, 00:38:15.048 "ddgst": false 00:38:15.048 }, 00:38:15.048 "method": "bdev_nvme_attach_controller" 00:38:15.048 },{ 00:38:15.048 "params": { 00:38:15.048 "name": "Nvme1", 00:38:15.048 "trtype": "tcp", 00:38:15.048 "traddr": "10.0.0.2", 00:38:15.048 "adrfam": "ipv4", 00:38:15.048 "trsvcid": "4420", 00:38:15.048 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:15.048 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:15.048 "hdgst": false, 00:38:15.048 "ddgst": false 00:38:15.048 }, 00:38:15.048 "method": "bdev_nvme_attach_controller" 00:38:15.048 }' 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # asan_lib= 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1334 -- # [[ -n '' ]] 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1332 -- # for sanitizer in "${sanitizers[@]}" 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # grep libclang_rt.asan 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # awk '{print $3}' 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # asan_lib= 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1334 -- # [[ -n '' ]] 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:15.048 09:59:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:15.048 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:15.048 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:15.048 fio-3.35 00:38:15.048 Starting 2 threads 00:38:25.047 00:38:25.047 filename0: (groupid=0, jobs=1): err= 0: pid=3670062: Mon Oct 7 09:59:24 2024 00:38:25.047 read: IOPS=190, BW=763KiB/s (781kB/s)(7632KiB/10004msec) 00:38:25.047 slat (nsec): min=5490, max=51019, avg=6384.59, stdev=1935.08 00:38:25.047 clat (usec): min=483, max=42328, avg=20954.24, stdev=20321.03 00:38:25.047 lat (usec): min=489, max=42334, avg=20960.63, stdev=20320.96 00:38:25.047 clat percentiles (usec): 00:38:25.047 | 1.00th=[ 506], 5.00th=[ 652], 10.00th=[ 668], 20.00th=[ 685], 00:38:25.047 | 30.00th=[ 693], 40.00th=[ 709], 50.00th=[ 1500], 60.00th=[41157], 00:38:25.047 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:25.047 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:25.047 | 99.99th=[42206] 00:38:25.047 bw ( KiB/s): min= 704, max= 768, per=66.10%, avg=764.63, stdev=14.68, samples=19 00:38:25.047 iops : min= 176, max= 192, avg=191.16, stdev= 3.67, samples=19 00:38:25.047 lat (usec) : 500=0.58%, 750=48.64%, 1000=0.68% 00:38:25.047 lat (msec) : 2=0.21%, 50=49.90% 00:38:25.047 cpu : usr=95.42%, sys=4.37%, ctx=9, majf=0, minf=191 00:38:25.047 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:25.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:25.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:25.047 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:25.047 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:25.047 filename1: (groupid=0, jobs=1): err= 0: pid=3670063: Mon Oct 7 09:59:24 2024 00:38:25.047 read: IOPS=98, BW=393KiB/s (403kB/s)(3936KiB/10009msec) 00:38:25.047 slat (nsec): min=5480, max=58126, avg=6377.61, stdev=2055.90 00:38:25.047 clat (usec): min=592, max=42648, avg=40668.14, stdev=3619.48 00:38:25.047 lat (usec): min=598, max=42706, avg=40674.52, stdev=3619.57 00:38:25.047 clat percentiles (usec): 00:38:25.047 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:25.047 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:25.047 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:25.047 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:38:25.047 | 99.99th=[42730] 00:38:25.047 bw ( KiB/s): min= 384, max= 448, per=33.83%, avg=392.00, stdev=17.60, samples=20 00:38:25.047 iops : min= 96, max= 112, avg=98.00, stdev= 4.40, samples=20 00:38:25.047 lat (usec) : 750=0.41%, 1000=0.41% 00:38:25.047 lat (msec) : 50=99.19% 00:38:25.047 cpu : usr=95.60%, sys=4.16%, ctx=52, majf=0, minf=45 00:38:25.047 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:25.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:25.047 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:25.047 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:25.047 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:25.047 00:38:25.047 Run status group 0 (all jobs): 00:38:25.047 READ: bw=1156KiB/s (1183kB/s), 393KiB/s-763KiB/s (403kB/s-781kB/s), io=11.3MiB (11.8MB), run=10004-10009msec 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:25.047 00:38:25.047 real 0m11.463s 00:38:25.047 user 0m35.439s 00:38:25.047 sys 0m1.225s 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # xtrace_disable 00:38:25.047 09:59:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:25.047 ************************************ 00:38:25.047 END TEST fio_dif_1_multi_subsystems 00:38:25.047 ************************************ 00:38:25.047 09:59:24 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 
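# run_test, as invoked above, appears to wrap each test function with the arg-count
# check ('[' 2 -le 1 ']'), the START/END banner rows, and the per-test timing seen
# throughout this log. A rough sketch of that wrapper — illustrative only, not the
# actual autotest_common.sh implementation:
run_test_sketch() {
    local name=$1; shift
    printf '%s\n' '************************************' "START TEST $name" \
        '************************************'
    "$@"; local rc=$?    # run the test function with its remaining arguments
    printf '%s\n' '************************************' "END TEST $name" \
        '************************************'
    return "$rc"
}
# e.g.: run_test_sketch fio_dif_rand_params fio_dif_rand_params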
00:38:25.047 09:59:24 nvmf_dif -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:38:25.047 09:59:24 nvmf_dif -- common/autotest_common.sh@1110 -- # xtrace_disable 00:38:25.047 09:59:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:25.047 ************************************ 00:38:25.047 START TEST fio_dif_rand_params 00:38:25.047 ************************************ 00:38:25.047 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # fio_dif_rand_params 00:38:25.047 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:25.047 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:25.047 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:25.047 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:25.047 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:25.047 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:25.047 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:25.047 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:25.047 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:25.047 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:25.048 bdev_null0 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:25.048 [2024-10-07 09:59:24.698541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:25.048 { 00:38:25.048 "params": { 00:38:25.048 "name": "Nvme$subsystem", 00:38:25.048 "trtype": "$TEST_TRANSPORT", 00:38:25.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:25.048 "adrfam": "ipv4", 00:38:25.048 "trsvcid": "$NVMF_PORT", 00:38:25.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:25.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:25.048 "hdgst": ${hdgst:-false}, 00:38:25.048 "ddgst": ${ddgst:-false} 00:38:25.048 }, 00:38:25.048 "method": "bdev_nvme_attach_controller" 00:38:25.048 } 00:38:25.048 EOF 00:38:25.048 )") 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1325 -- # local fio_dir=/usr/src/fio 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1327 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1327 -- # local sanitizers 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1328 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1329 -- # shift 00:38:25.048 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # local asan_lib= 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1332 -- # for sanitizer in "${sanitizers[@]}" 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # grep libasan 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # awk '{print $3}' 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
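# The create_subsystems trace above reduces to four RPCs per subsystem id: a
# DIF-enabled null bdev (arguments 64 and 512 with 16-byte metadata, --dif-type 3
# for this test), an NVMe-oF subsystem, a namespace, and a TCP listener. A hedged
# consolidation of the exact calls from the trace (the helper name and $dif_type
# parameter are illustrative):
create_subsystem_sketch() {
    local id=$1 dif_type=${2:-3}
    rpc_cmd bdev_null_create "bdev_null$id" 64 512 --md-size 16 --dif-type "$dif_type"
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$id" \
        --serial-number "53313233-$id" --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$id" "bdev_null$id"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$id" \
        -t tcp -a 10.0.0.2 -s 4420
}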
00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:25.309 "params": { 00:38:25.309 "name": "Nvme0", 00:38:25.309 "trtype": "tcp", 00:38:25.309 "traddr": "10.0.0.2", 00:38:25.309 "adrfam": "ipv4", 00:38:25.309 "trsvcid": "4420", 00:38:25.309 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:25.309 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:25.309 "hdgst": false, 00:38:25.309 "ddgst": false 00:38:25.309 }, 00:38:25.309 "method": "bdev_nvme_attach_controller" 00:38:25.309 }' 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # asan_lib= 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # [[ -n '' ]] 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1332 -- # for sanitizer in "${sanitizers[@]}" 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # grep libclang_rt.asan 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # awk '{print $3}' 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # asan_lib= 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # [[ -n '' ]] 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:25.309 09:59:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:25.571 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:25.571 ... 
00:38:25.571 fio-3.35 00:38:25.571 Starting 3 threads 00:38:32.160 00:38:32.160 filename0: (groupid=0, jobs=1): err= 0: pid=3672259: Mon Oct 7 09:59:30 2024 00:38:32.160 read: IOPS=304, BW=38.1MiB/s (39.9MB/s)(192MiB/5047msec) 00:38:32.160 slat (nsec): min=5530, max=32431, avg=7579.93, stdev=1635.68 00:38:32.160 clat (usec): min=3724, max=87815, avg=9805.85, stdev=7910.15 00:38:32.160 lat (usec): min=3733, max=87823, avg=9813.43, stdev=7910.33 00:38:32.160 clat percentiles (usec): 00:38:32.160 | 1.00th=[ 4490], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 7111], 00:38:32.160 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8848], 00:38:32.160 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[11207], 00:38:32.160 | 99.00th=[47973], 99.50th=[49021], 99.90th=[50594], 99.95th=[87557], 00:38:32.160 | 99.99th=[87557] 00:38:32.160 bw ( KiB/s): min=16896, max=46848, per=33.80%, avg=39296.00, stdev=8880.61, samples=10 00:38:32.160 iops : min= 132, max= 366, avg=307.00, stdev=69.38, samples=10 00:38:32.160 lat (msec) : 4=0.39%, 10=84.85%, 20=10.79%, 50=3.84%, 100=0.13% 00:38:32.160 cpu : usr=94.07%, sys=5.65%, ctx=10, majf=0, minf=96 00:38:32.160 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:32.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.160 issued rwts: total=1538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.160 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:32.160 filename0: (groupid=0, jobs=1): err= 0: pid=3672260: Mon Oct 7 09:59:30 2024 00:38:32.160 read: IOPS=299, BW=37.4MiB/s (39.2MB/s)(189MiB/5046msec) 00:38:32.160 slat (nsec): min=5496, max=73201, avg=7412.49, stdev=4896.59 00:38:32.160 clat (usec): min=3372, max=89415, avg=9979.44, stdev=7665.13 00:38:32.160 lat (usec): min=3380, max=89424, avg=9986.86, stdev=7665.55 00:38:32.160 clat percentiles (usec): 00:38:32.160 | 1.00th=[ 3916], 5.00th=[ 5473], 10.00th=[ 6325], 20.00th=[ 7308], 00:38:32.160 | 30.00th=[ 7832], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9372], 00:38:32.160 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10683], 95.00th=[11338], 00:38:32.160 | 99.00th=[48497], 99.50th=[49021], 99.90th=[50594], 99.95th=[89654], 00:38:32.160 | 99.99th=[89654] 00:38:32.160 bw ( KiB/s): min=19968, max=45568, per=33.23%, avg=38630.40, stdev=8112.91, samples=10 00:38:32.160 iops : min= 156, max= 356, avg=301.80, stdev=63.38, samples=10 00:38:32.160 lat (msec) : 4=1.26%, 10=77.17%, 20=17.94%, 50=3.51%, 100=0.13% 00:38:32.160 cpu : usr=93.82%, sys=5.91%, ctx=12, majf=0, minf=78 00:38:32.160 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:32.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.160 issued rwts: total=1511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.160 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:32.160 filename0: (groupid=0, jobs=1): err= 0: pid=3672261: Mon Oct 7 09:59:30 2024 00:38:32.160 read: IOPS=304, BW=38.0MiB/s (39.9MB/s)(192MiB/5044msec) 00:38:32.160 slat (usec): min=5, max=162, avg= 8.30, stdev= 5.25 00:38:32.160 clat (usec): min=3765, max=52025, avg=9818.67, stdev=8030.31 00:38:32.160 lat (usec): min=3774, max=52065, avg=9826.96, stdev=8030.96 00:38:32.160 clat percentiles (usec): 00:38:32.160 | 1.00th=[ 4359], 5.00th=[ 5604], 10.00th=[ 6128], 20.00th=[ 
6783], 00:38:32.160 | 30.00th=[ 7373], 40.00th=[ 7832], 50.00th=[ 8356], 60.00th=[ 8717], 00:38:32.160 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10421], 95.00th=[11469], 00:38:32.160 | 99.00th=[47973], 99.50th=[49021], 99.90th=[52167], 99.95th=[52167], 00:38:32.160 | 99.99th=[52167] 00:38:32.160 bw ( KiB/s): min=20736, max=46080, per=33.76%, avg=39244.80, stdev=7299.92, samples=10 00:38:32.160 iops : min= 162, max= 360, avg=306.60, stdev=57.03, samples=10 00:38:32.160 lat (msec) : 4=0.13%, 10=84.43%, 20=11.21%, 50=3.91%, 100=0.33% 00:38:32.160 cpu : usr=94.59%, sys=5.10%, ctx=36, majf=0, minf=152 00:38:32.160 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:32.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.160 issued rwts: total=1535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.160 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:32.160 00:38:32.160 Run status group 0 (all jobs): 00:38:32.160 READ: bw=114MiB/s (119MB/s), 37.4MiB/s-38.1MiB/s (39.2MB/s-39.9MB/s), io=573MiB (601MB), run=5044-5047msec 00:38:32.160 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:32.160 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:32.161 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:32.161 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:32.161 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:32.161 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:32.161 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:32.161 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.161 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:32.161 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:32.161 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:32.161 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.161 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:32.161 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:32.161 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:32.161 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:32.161 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:32.161 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.161 bdev_null0 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.161 [2024-10-07 09:59:31.043602] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.161 bdev_null1 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.161 bdev_null2 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:32.161 { 00:38:32.161 "params": { 00:38:32.161 "name": "Nvme$subsystem", 00:38:32.161 "trtype": "$TEST_TRANSPORT", 00:38:32.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:32.161 "adrfam": "ipv4", 00:38:32.161 "trsvcid": "$NVMF_PORT", 00:38:32.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:32.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:32.161 "hdgst": ${hdgst:-false}, 00:38:32.161 "ddgst": ${ddgst:-false} 00:38:32.161 }, 00:38:32.161 "method": "bdev_nvme_attach_controller" 00:38:32.161 } 00:38:32.161 EOF 00:38:32.161 )") 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1325 -- # local fio_dir=/usr/src/fio 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1327 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1327 -- # local sanitizers 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1328 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1329 -- # shift 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # local asan_lib= 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1332 -- # for sanitizer in "${sanitizers[@]}" 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # grep libasan 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # awk '{print $3}' 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:32.161 09:59:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:32.161 { 00:38:32.161 "params": { 00:38:32.161 "name": "Nvme$subsystem", 00:38:32.161 "trtype": "$TEST_TRANSPORT", 00:38:32.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:32.161 "adrfam": "ipv4", 00:38:32.161 "trsvcid": "$NVMF_PORT", 00:38:32.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:32.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:32.161 "hdgst": ${hdgst:-false}, 00:38:32.161 "ddgst": ${ddgst:-false} 00:38:32.161 }, 00:38:32.162 "method": "bdev_nvme_attach_controller" 00:38:32.162 } 00:38:32.162 EOF 00:38:32.162 )") 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:32.162 { 00:38:32.162 "params": { 00:38:32.162 "name": "Nvme$subsystem", 00:38:32.162 "trtype": "$TEST_TRANSPORT", 00:38:32.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:32.162 "adrfam": "ipv4", 00:38:32.162 "trsvcid": "$NVMF_PORT", 00:38:32.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:32.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:32.162 "hdgst": ${hdgst:-false}, 00:38:32.162 "ddgst": ${ddgst:-false} 00:38:32.162 }, 00:38:32.162 "method": "bdev_nvme_attach_controller" 00:38:32.162 } 00:38:32.162 EOF 00:38:32.162 )") 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:32.162 "params": { 00:38:32.162 "name": "Nvme0", 00:38:32.162 "trtype": "tcp", 00:38:32.162 "traddr": "10.0.0.2", 00:38:32.162 "adrfam": "ipv4", 00:38:32.162 "trsvcid": "4420", 00:38:32.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:32.162 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:32.162 "hdgst": false, 00:38:32.162 "ddgst": false 00:38:32.162 }, 00:38:32.162 "method": "bdev_nvme_attach_controller" 00:38:32.162 },{ 00:38:32.162 "params": { 00:38:32.162 "name": "Nvme1", 00:38:32.162 "trtype": "tcp", 00:38:32.162 "traddr": "10.0.0.2", 00:38:32.162 "adrfam": "ipv4", 00:38:32.162 "trsvcid": "4420", 00:38:32.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:32.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:32.162 "hdgst": false, 00:38:32.162 "ddgst": false 00:38:32.162 }, 00:38:32.162 "method": "bdev_nvme_attach_controller" 00:38:32.162 },{ 00:38:32.162 "params": { 00:38:32.162 "name": "Nvme2", 00:38:32.162 "trtype": "tcp", 00:38:32.162 "traddr": "10.0.0.2", 00:38:32.162 "adrfam": "ipv4", 00:38:32.162 "trsvcid": "4420", 00:38:32.162 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:32.162 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:32.162 "hdgst": false, 00:38:32.162 "ddgst": false 00:38:32.162 }, 00:38:32.162 "method": "bdev_nvme_attach_controller" 00:38:32.162 }' 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # asan_lib= 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # [[ -n '' ]] 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1332 -- # for sanitizer in "${sanitizers[@]}" 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # grep libclang_rt.asan 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # awk '{print $3}' 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- 
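Two things interleave in the trace above. gen_nvmf_target_json builds one bdev_nvme_attach_controller entry per subsystem from a heredoc template, and jq splices the fragments into the JSON that fio's spdk_bdev engine reads from /dev/fd/62; the finished three-controller document is printed just above, with the hdgst/ddgst digest options left at their false defaults. In parallel, fio_bdev runs ldd over the fio plugin looking for libasan or libclang_rt.asan so that a matching sanitizer runtime can be preloaded ahead of the plugin. A condensed sketch of that probe, paraphrasing the autotest_common.sh lines in this trace:

    # find an ASAN runtime linked into the fio plugin, if any, then preload it
    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # asan_lib stays empty on this build, so only the plugin itself is preloaded
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61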
common/autotest_common.sh@1333 -- # asan_lib= 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # [[ -n '' ]] 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:32.162 09:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:32.162 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:32.162 ... 00:38:32.162 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:32.162 ... 00:38:32.162 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:32.162 ... 00:38:32.162 fio-3.35 00:38:32.162 Starting 24 threads 00:38:44.400 00:38:44.400 filename0: (groupid=0, jobs=1): err= 0: pid=3673769: Mon Oct 7 09:59:42 2024 00:38:44.400 read: IOPS=694, BW=2777KiB/s (2843kB/s)(27.1MiB/10003msec) 00:38:44.400 slat (nsec): min=5625, max=79155, avg=7183.07, stdev=3463.04 00:38:44.400 clat (usec): min=2500, max=30981, avg=22987.03, stdev=2919.01 00:38:44.400 lat (usec): min=2512, max=30988, avg=22994.21, stdev=2918.23 00:38:44.400 clat percentiles (usec): 00:38:44.400 | 1.00th=[10552], 5.00th=[15664], 10.00th=[23200], 20.00th=[23462], 00:38:44.400 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:38:44.400 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:44.400 | 99.00th=[25297], 99.50th=[25560], 99.90th=[30016], 99.95th=[31065], 00:38:44.400 | 99.99th=[31065] 00:38:44.400 bw ( KiB/s): min= 2682, max= 4304, per=4.29%, avg=2782.00, stdev=370.78, samples=19 00:38:44.400 iops : min= 670, max= 1076, avg=695.47, stdev=92.70, samples=19 00:38:44.400 lat (msec) : 4=0.49%, 10=0.43%, 20=7.92%, 50=91.16% 00:38:44.400 cpu : usr=98.64%, sys=1.02%, ctx=80, majf=0, minf=115 00:38:44.400 IO depths : 1=5.7%, 2=11.5%, 4=23.4%, 8=52.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:38:44.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.400 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.400 issued rwts: total=6944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.400 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.400 filename0: (groupid=0, jobs=1): err= 0: pid=3673770: Mon Oct 7 09:59:42 2024 00:38:44.400 read: IOPS=671, BW=2687KiB/s (2752kB/s)(26.2MiB/10003msec) 00:38:44.400 slat (nsec): min=5643, max=73963, avg=20453.24, stdev=12817.36 00:38:44.400 clat (usec): min=6333, max=49940, avg=23612.46, stdev=1632.78 00:38:44.400 lat (usec): min=6339, max=49959, avg=23632.91, stdev=1633.60 00:38:44.400 clat percentiles (usec): 00:38:44.400 | 1.00th=[22152], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:38:44.400 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:38:44.400 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:44.400 | 99.00th=[25035], 99.50th=[25560], 99.90th=[43779], 99.95th=[43779], 00:38:44.400 | 99.99th=[50070] 00:38:44.400 bw ( KiB/s): min= 2432, max= 2816, per=4.12%, avg=2674.53, stdev=72.59, samples=19 00:38:44.400 iops : min= 608, max= 704, avg=668.63, stdev=18.15, samples=19 00:38:44.400 lat (msec) : 10=0.24%, 20=0.51%, 50=99.26% 00:38:44.400 cpu : usr=98.52%, sys=1.01%, ctx=138, 
majf=0, minf=50 00:38:44.400 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:44.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.400 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.400 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.400 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.400 filename0: (groupid=0, jobs=1): err= 0: pid=3673771: Mon Oct 7 09:59:42 2024 00:38:44.400 read: IOPS=670, BW=2680KiB/s (2745kB/s)(26.2MiB/10005msec) 00:38:44.400 slat (nsec): min=5629, max=89900, avg=15731.95, stdev=14192.85 00:38:44.400 clat (usec): min=17433, max=35509, avg=23753.94, stdev=782.27 00:38:44.400 lat (usec): min=17439, max=35532, avg=23769.67, stdev=780.44 00:38:44.400 clat percentiles (usec): 00:38:44.400 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:44.400 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:44.400 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:44.400 | 99.00th=[25297], 99.50th=[25560], 99.90th=[35390], 99.95th=[35390], 00:38:44.400 | 99.99th=[35390] 00:38:44.400 bw ( KiB/s): min= 2560, max= 2816, per=4.13%, avg=2681.26, stdev=51.80, samples=19 00:38:44.400 iops : min= 640, max= 704, avg=670.32, stdev=12.95, samples=19 00:38:44.400 lat (msec) : 20=0.27%, 50=99.73% 00:38:44.400 cpu : usr=98.75%, sys=0.85%, ctx=110, majf=0, minf=52 00:38:44.400 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:44.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.400 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.400 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.400 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.400 filename0: (groupid=0, jobs=1): err= 0: pid=3673772: Mon Oct 7 09:59:42 2024 00:38:44.400 read: IOPS=666, BW=2667KiB/s (2731kB/s)(26.1MiB/10005msec) 00:38:44.400 slat (nsec): min=5636, max=93161, avg=20287.26, stdev=14327.03 00:38:44.400 clat (usec): min=12251, max=43837, avg=23841.61, stdev=1876.98 00:38:44.400 lat (usec): min=12256, max=43869, avg=23861.90, stdev=1876.59 00:38:44.400 clat percentiles (usec): 00:38:44.400 | 1.00th=[17171], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:44.400 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:44.400 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[25297], 00:38:44.400 | 99.00th=[32637], 99.50th=[33162], 99.90th=[35390], 99.95th=[35390], 00:38:44.400 | 99.99th=[43779] 00:38:44.400 bw ( KiB/s): min= 2432, max= 2816, per=4.09%, avg=2654.53, stdev=93.36, samples=19 00:38:44.400 iops : min= 608, max= 704, avg=663.63, stdev=23.34, samples=19 00:38:44.400 lat (msec) : 20=2.01%, 50=97.99% 00:38:44.400 cpu : usr=98.75%, sys=0.90%, ctx=54, majf=0, minf=71 00:38:44.400 IO depths : 1=5.8%, 2=11.5%, 4=23.5%, 8=52.3%, 16=7.0%, 32=0.0%, >=64=0.0% 00:38:44.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.401 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.401 issued rwts: total=6670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.401 filename0: (groupid=0, jobs=1): err= 0: pid=3673773: Mon Oct 7 09:59:42 2024 00:38:44.401 read: IOPS=669, BW=2679KiB/s (2743kB/s)(26.2MiB/10009msec) 00:38:44.401 slat 
(nsec): min=5694, max=76332, avg=18728.12, stdev=12613.23 00:38:44.401 clat (usec): min=16030, max=39791, avg=23737.55, stdev=984.43 00:38:44.401 lat (usec): min=16042, max=39818, avg=23756.28, stdev=983.96 00:38:44.401 clat percentiles (usec): 00:38:44.401 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:44.401 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:44.401 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:44.401 | 99.00th=[25035], 99.50th=[25297], 99.90th=[39584], 99.95th=[39584], 00:38:44.401 | 99.99th=[39584] 00:38:44.401 bw ( KiB/s): min= 2560, max= 2816, per=4.13%, avg=2681.26, stdev=51.80, samples=19 00:38:44.401 iops : min= 640, max= 704, avg=670.32, stdev=12.95, samples=19 00:38:44.401 lat (msec) : 20=0.24%, 50=99.76% 00:38:44.401 cpu : usr=99.09%, sys=0.62%, ctx=14, majf=0, minf=62 00:38:44.401 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:44.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.401 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.401 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.401 filename0: (groupid=0, jobs=1): err= 0: pid=3673774: Mon Oct 7 09:59:42 2024 00:38:44.401 read: IOPS=666, BW=2667KiB/s (2731kB/s)(26.1MiB/10008msec) 00:38:44.401 slat (nsec): min=5638, max=70084, avg=20563.03, stdev=12481.73 00:38:44.401 clat (usec): min=7924, max=38498, avg=23798.67, stdev=1610.57 00:38:44.401 lat (usec): min=7949, max=38518, avg=23819.24, stdev=1610.51 00:38:44.401 clat percentiles (usec): 00:38:44.401 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:44.401 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:44.401 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24773], 00:38:44.401 | 99.00th=[31065], 99.50th=[32637], 99.90th=[38536], 99.95th=[38536], 00:38:44.401 | 99.99th=[38536] 00:38:44.401 bw ( KiB/s): min= 2432, max= 2816, per=4.10%, avg=2661.05, stdev=91.30, samples=19 00:38:44.401 iops : min= 608, max= 704, avg=665.26, stdev=22.83, samples=19 00:38:44.401 lat (msec) : 10=0.15%, 20=0.33%, 50=99.52% 00:38:44.401 cpu : usr=97.73%, sys=1.37%, ctx=680, majf=0, minf=50 00:38:44.401 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:44.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.401 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.401 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.401 filename0: (groupid=0, jobs=1): err= 0: pid=3673775: Mon Oct 7 09:59:42 2024 00:38:44.401 read: IOPS=669, BW=2680KiB/s (2744kB/s)(26.2MiB/10006msec) 00:38:44.401 slat (nsec): min=5645, max=70191, avg=13544.28, stdev=9087.68 00:38:44.401 clat (usec): min=14033, max=40640, avg=23777.74, stdev=1157.60 00:38:44.401 lat (usec): min=14039, max=40663, avg=23791.29, stdev=1157.65 00:38:44.401 clat percentiles (usec): 00:38:44.401 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:44.401 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:38:44.401 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:44.401 | 99.00th=[25297], 99.50th=[25822], 99.90th=[40633], 99.95th=[40633], 00:38:44.401 | 
99.99th=[40633] 00:38:44.401 bw ( KiB/s): min= 2560, max= 2800, per=4.13%, avg=2680.95, stdev=49.81, samples=19 00:38:44.401 iops : min= 640, max= 700, avg=670.21, stdev=12.45, samples=19 00:38:44.401 lat (msec) : 20=0.66%, 50=99.34% 00:38:44.401 cpu : usr=98.94%, sys=0.78%, ctx=25, majf=0, minf=70 00:38:44.401 IO depths : 1=2.5%, 2=8.7%, 4=24.9%, 8=53.8%, 16=10.1%, 32=0.0%, >=64=0.0% 00:38:44.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.401 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.401 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.401 filename0: (groupid=0, jobs=1): err= 0: pid=3673776: Mon Oct 7 09:59:42 2024 00:38:44.401 read: IOPS=672, BW=2690KiB/s (2754kB/s)(26.3MiB/10018msec) 00:38:44.401 slat (nsec): min=5685, max=67269, avg=18014.22, stdev=11115.26 00:38:44.401 clat (usec): min=6668, max=26940, avg=23636.15, stdev=1111.91 00:38:44.401 lat (usec): min=6680, max=26987, avg=23654.16, stdev=1111.67 00:38:44.401 clat percentiles (usec): 00:38:44.401 | 1.00th=[22152], 5.00th=[23200], 10.00th=[23200], 20.00th=[23200], 00:38:44.401 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:44.401 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:44.401 | 99.00th=[25297], 99.50th=[25560], 99.90th=[26870], 99.95th=[26870], 00:38:44.401 | 99.99th=[26870] 00:38:44.401 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2688.00, stdev=41.53, samples=20 00:38:44.401 iops : min= 640, max= 704, avg=672.00, stdev=10.38, samples=20 00:38:44.401 lat (msec) : 10=0.24%, 20=0.48%, 50=99.29% 00:38:44.401 cpu : usr=98.87%, sys=0.87%, ctx=8, majf=0, minf=56 00:38:44.401 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:44.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.401 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.401 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.401 filename1: (groupid=0, jobs=1): err= 0: pid=3673777: Mon Oct 7 09:59:42 2024 00:38:44.401 read: IOPS=670, BW=2681KiB/s (2745kB/s)(26.2MiB/10003msec) 00:38:44.401 slat (nsec): min=5649, max=75053, avg=16040.14, stdev=11958.57 00:38:44.401 clat (usec): min=7234, max=36677, avg=23751.08, stdev=959.09 00:38:44.401 lat (usec): min=7241, max=36708, avg=23767.12, stdev=958.30 00:38:44.401 clat percentiles (usec): 00:38:44.401 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:44.401 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:44.401 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:44.401 | 99.00th=[25297], 99.50th=[25560], 99.90th=[36439], 99.95th=[36439], 00:38:44.401 | 99.99th=[36439] 00:38:44.401 bw ( KiB/s): min= 2560, max= 2816, per=4.13%, avg=2681.26, stdev=62.27, samples=19 00:38:44.401 iops : min= 640, max= 704, avg=670.32, stdev=15.57, samples=19 00:38:44.401 lat (msec) : 10=0.03%, 20=0.51%, 50=99.46% 00:38:44.401 cpu : usr=98.38%, sys=1.14%, ctx=157, majf=0, minf=48 00:38:44.401 IO depths : 1=2.5%, 2=8.8%, 4=25.0%, 8=53.7%, 16=10.0%, 32=0.0%, >=64=0.0% 00:38:44.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.401 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.401 issued rwts: 
total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.401 filename1: (groupid=0, jobs=1): err= 0: pid=3673778: Mon Oct 7 09:59:42 2024 00:38:44.401 read: IOPS=667, BW=2670KiB/s (2734kB/s)(26.1MiB/10013msec) 00:38:44.401 slat (nsec): min=5630, max=91515, avg=23177.74, stdev=15537.17 00:38:44.401 clat (usec): min=13083, max=40142, avg=23764.00, stdev=1908.83 00:38:44.401 lat (usec): min=13094, max=40169, avg=23787.18, stdev=1908.45 00:38:44.401 clat percentiles (usec): 00:38:44.401 | 1.00th=[16581], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:38:44.401 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:38:44.401 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[25035], 00:38:44.401 | 99.00th=[32375], 99.50th=[33817], 99.90th=[40109], 99.95th=[40109], 00:38:44.401 | 99.99th=[40109] 00:38:44.401 bw ( KiB/s): min= 2432, max= 2912, per=4.11%, avg=2667.20, stdev=99.73, samples=20 00:38:44.401 iops : min= 608, max= 728, avg=666.80, stdev=24.93, samples=20 00:38:44.401 lat (msec) : 20=2.17%, 50=97.83% 00:38:44.401 cpu : usr=98.21%, sys=1.22%, ctx=215, majf=0, minf=53 00:38:44.401 IO depths : 1=6.1%, 2=12.1%, 4=24.4%, 8=50.9%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:44.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.401 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.401 issued rwts: total=6684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.401 filename1: (groupid=0, jobs=1): err= 0: pid=3673779: Mon Oct 7 09:59:42 2024 00:38:44.401 read: IOPS=671, BW=2686KiB/s (2751kB/s)(26.2MiB/10004msec) 00:38:44.401 slat (nsec): min=5475, max=94541, avg=20584.57, stdev=14422.60 00:38:44.401 clat (usec): min=7183, max=44015, avg=23623.00, stdev=2011.89 00:38:44.401 lat (usec): min=7189, max=44031, avg=23643.58, stdev=2012.11 00:38:44.401 clat percentiles (usec): 00:38:44.401 | 1.00th=[16057], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:38:44.401 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:38:44.401 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24773], 00:38:44.401 | 99.00th=[30278], 99.50th=[31065], 99.90th=[43779], 99.95th=[43779], 00:38:44.401 | 99.99th=[43779] 00:38:44.401 bw ( KiB/s): min= 2432, max= 2816, per=4.12%, avg=2671.16, stdev=79.37, samples=19 00:38:44.401 iops : min= 608, max= 704, avg=667.79, stdev=19.84, samples=19 00:38:44.401 lat (msec) : 10=0.48%, 20=1.53%, 50=97.99% 00:38:44.401 cpu : usr=98.93%, sys=0.78%, ctx=20, majf=0, minf=50 00:38:44.401 IO depths : 1=5.8%, 2=11.8%, 4=24.1%, 8=51.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:38:44.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.401 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.401 issued rwts: total=6718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.401 filename1: (groupid=0, jobs=1): err= 0: pid=3673780: Mon Oct 7 09:59:42 2024 00:38:44.401 read: IOPS=672, BW=2692KiB/s (2756kB/s)(26.3MiB/10001msec) 00:38:44.401 slat (nsec): min=5469, max=92549, avg=22912.79, stdev=16246.54 00:38:44.401 clat (usec): min=5912, max=47341, avg=23573.89, stdev=2220.36 00:38:44.401 lat (usec): min=5918, max=47360, avg=23596.80, stdev=2220.84 00:38:44.401 clat percentiles (usec): 00:38:44.401 | 1.00th=[15533], 5.00th=[22938], 
10.00th=[23200], 20.00th=[23200], 00:38:44.401 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:44.401 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:44.401 | 99.00th=[30278], 99.50th=[35914], 99.90th=[47449], 99.95th=[47449], 00:38:44.401 | 99.99th=[47449] 00:38:44.402 bw ( KiB/s): min= 2432, max= 2816, per=4.13%, avg=2678.74, stdev=75.34, samples=19 00:38:44.402 iops : min= 608, max= 704, avg=669.68, stdev=18.84, samples=19 00:38:44.402 lat (msec) : 10=0.24%, 20=2.67%, 50=97.09% 00:38:44.402 cpu : usr=98.72%, sys=0.91%, ctx=90, majf=0, minf=80 00:38:44.402 IO depths : 1=4.8%, 2=9.8%, 4=20.3%, 8=56.5%, 16=8.5%, 32=0.0%, >=64=0.0% 00:38:44.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.402 complete : 0=0.0%, 4=93.1%, 8=1.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.402 issued rwts: total=6730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.402 filename1: (groupid=0, jobs=1): err= 0: pid=3673781: Mon Oct 7 09:59:42 2024 00:38:44.402 read: IOPS=672, BW=2691KiB/s (2756kB/s)(26.3MiB/10002msec) 00:38:44.402 slat (nsec): min=5677, max=96293, avg=22528.78, stdev=16733.69 00:38:44.402 clat (usec): min=2261, max=46672, avg=23568.35, stdev=1931.27 00:38:44.402 lat (usec): min=2267, max=46688, avg=23590.88, stdev=1931.02 00:38:44.402 clat percentiles (usec): 00:38:44.402 | 1.00th=[21365], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:38:44.402 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:38:44.402 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:44.402 | 99.00th=[25035], 99.50th=[25297], 99.90th=[46400], 99.95th=[46400], 00:38:44.402 | 99.99th=[46924] 00:38:44.402 bw ( KiB/s): min= 2436, max= 2816, per=4.12%, avg=2674.74, stdev=71.85, samples=19 00:38:44.402 iops : min= 609, max= 704, avg=668.68, stdev=17.96, samples=19 00:38:44.402 lat (msec) : 4=0.24%, 10=0.30%, 20=0.39%, 50=99.08% 00:38:44.402 cpu : usr=98.65%, sys=0.92%, ctx=54, majf=0, minf=73 00:38:44.402 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:44.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.402 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.402 issued rwts: total=6730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.402 filename1: (groupid=0, jobs=1): err= 0: pid=3673782: Mon Oct 7 09:59:42 2024 00:38:44.402 read: IOPS=670, BW=2680KiB/s (2745kB/s)(26.2MiB/10005msec) 00:38:44.402 slat (usec): min=5, max=101, avg=10.73, stdev= 9.39 00:38:44.402 clat (usec): min=12327, max=41721, avg=23791.06, stdev=869.89 00:38:44.402 lat (usec): min=12337, max=41744, avg=23801.79, stdev=869.39 00:38:44.402 clat percentiles (usec): 00:38:44.402 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:38:44.402 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:38:44.402 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:44.402 | 99.00th=[25297], 99.50th=[25560], 99.90th=[35390], 99.95th=[35390], 00:38:44.402 | 99.99th=[41681] 00:38:44.402 bw ( KiB/s): min= 2560, max= 2816, per=4.13%, avg=2681.26, stdev=51.80, samples=19 00:38:44.402 iops : min= 640, max= 704, avg=670.32, stdev=12.95, samples=19 00:38:44.402 lat (msec) : 20=0.36%, 50=99.64% 00:38:44.402 cpu : usr=98.84%, sys=0.75%, ctx=51, majf=0, 
minf=51 00:38:44.402 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:44.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.402 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.402 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.402 filename1: (groupid=0, jobs=1): err= 0: pid=3673783: Mon Oct 7 09:59:42 2024 00:38:44.402 read: IOPS=675, BW=2701KiB/s (2766kB/s)(26.4MiB/10003msec) 00:38:44.402 slat (nsec): min=5541, max=97142, avg=23785.91, stdev=17067.61 00:38:44.402 clat (usec): min=2931, max=47276, avg=23456.78, stdev=2251.39 00:38:44.402 lat (usec): min=2937, max=47296, avg=23480.56, stdev=2252.65 00:38:44.402 clat percentiles (usec): 00:38:44.402 | 1.00th=[15664], 5.00th=[22414], 10.00th=[22938], 20.00th=[23200], 00:38:44.402 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:38:44.402 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:44.402 | 99.00th=[29492], 99.50th=[32637], 99.90th=[47449], 99.95th=[47449], 00:38:44.402 | 99.99th=[47449] 00:38:44.402 bw ( KiB/s): min= 2436, max= 2896, per=4.14%, avg=2686.53, stdev=85.16, samples=19 00:38:44.402 iops : min= 609, max= 724, avg=671.63, stdev=21.29, samples=19 00:38:44.402 lat (msec) : 4=0.10%, 10=0.33%, 20=3.11%, 50=96.46% 00:38:44.402 cpu : usr=98.69%, sys=1.03%, ctx=26, majf=0, minf=56 00:38:44.402 IO depths : 1=5.5%, 2=11.1%, 4=22.8%, 8=53.3%, 16=7.2%, 32=0.0%, >=64=0.0% 00:38:44.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.402 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.402 issued rwts: total=6755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.402 filename1: (groupid=0, jobs=1): err= 0: pid=3673784: Mon Oct 7 09:59:42 2024 00:38:44.402 read: IOPS=698, BW=2795KiB/s (2862kB/s)(27.3MiB/10003msec) 00:38:44.402 slat (nsec): min=5606, max=94236, avg=19876.71, stdev=15614.04 00:38:44.402 clat (usec): min=2469, max=46670, avg=22747.33, stdev=3307.78 00:38:44.402 lat (usec): min=2475, max=46691, avg=22767.21, stdev=3311.08 00:38:44.402 clat percentiles (usec): 00:38:44.402 | 1.00th=[12256], 5.00th=[15926], 10.00th=[16909], 20.00th=[23200], 00:38:44.402 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:38:44.402 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:44.402 | 99.00th=[33162], 99.50th=[36439], 99.90th=[46400], 99.95th=[46400], 00:38:44.402 | 99.99th=[46924] 00:38:44.402 bw ( KiB/s): min= 2420, max= 3504, per=4.29%, avg=2784.21, stdev=249.60, samples=19 00:38:44.402 iops : min= 605, max= 876, avg=696.05, stdev=62.40, samples=19 00:38:44.402 lat (msec) : 4=0.14%, 10=0.46%, 20=12.90%, 50=86.49% 00:38:44.402 cpu : usr=98.92%, sys=0.82%, ctx=11, majf=0, minf=85 00:38:44.402 IO depths : 1=0.8%, 2=5.8%, 4=21.0%, 8=60.4%, 16=12.0%, 32=0.0%, >=64=0.0% 00:38:44.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.402 complete : 0=0.0%, 4=93.3%, 8=1.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.402 issued rwts: total=6990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.402 filename2: (groupid=0, jobs=1): err= 0: pid=3673785: Mon Oct 7 09:59:42 2024 00:38:44.402 read: IOPS=675, BW=2704KiB/s 
(2769kB/s)(26.5MiB/10021msec) 00:38:44.402 slat (nsec): min=5640, max=76320, avg=15024.87, stdev=11967.99 00:38:44.402 clat (usec): min=3672, max=27618, avg=23556.28, stdev=1808.96 00:38:44.402 lat (usec): min=3715, max=27629, avg=23571.30, stdev=1808.55 00:38:44.402 clat percentiles (usec): 00:38:44.402 | 1.00th=[11994], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:44.402 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:44.402 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:38:44.402 | 99.00th=[25035], 99.50th=[25297], 99.90th=[26870], 99.95th=[27657], 00:38:44.402 | 99.99th=[27657] 00:38:44.402 bw ( KiB/s): min= 2560, max= 2992, per=4.17%, avg=2702.90, stdev=79.73, samples=20 00:38:44.402 iops : min= 640, max= 748, avg=675.70, stdev=19.94, samples=20 00:38:44.402 lat (msec) : 4=0.21%, 10=0.37%, 20=0.84%, 50=98.58% 00:38:44.402 cpu : usr=98.80%, sys=0.77%, ctx=57, majf=0, minf=48 00:38:44.402 IO depths : 1=6.2%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:44.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.402 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.402 issued rwts: total=6774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.402 filename2: (groupid=0, jobs=1): err= 0: pid=3673786: Mon Oct 7 09:59:42 2024 00:38:44.402 read: IOPS=719, BW=2879KiB/s (2948kB/s)(28.2MiB/10023msec) 00:38:44.402 slat (nsec): min=5623, max=68051, avg=9705.47, stdev=6741.71 00:38:44.402 clat (usec): min=2623, max=34318, avg=22143.53, stdev=3568.34 00:38:44.402 lat (usec): min=2647, max=34327, avg=22153.23, stdev=3568.96 00:38:44.402 clat percentiles (usec): 00:38:44.402 | 1.00th=[10552], 5.00th=[15533], 10.00th=[16188], 20.00th=[20317], 00:38:44.402 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:38:44.402 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:38:44.402 | 99.00th=[24773], 99.50th=[30278], 99.90th=[32375], 99.95th=[32900], 00:38:44.402 | 99.99th=[34341] 00:38:44.402 bw ( KiB/s): min= 2688, max= 3968, per=4.44%, avg=2880.90, stdev=437.35, samples=20 00:38:44.402 iops : min= 672, max= 992, avg=720.20, stdev=109.34, samples=20 00:38:44.402 lat (msec) : 4=0.32%, 10=0.57%, 20=18.91%, 50=80.21% 00:38:44.402 cpu : usr=98.92%, sys=0.70%, ctx=79, majf=0, minf=68 00:38:44.402 IO depths : 1=2.6%, 2=7.7%, 4=21.4%, 8=58.3%, 16=9.9%, 32=0.0%, >=64=0.0% 00:38:44.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.402 complete : 0=0.0%, 4=93.3%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.402 issued rwts: total=7214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.402 filename2: (groupid=0, jobs=1): err= 0: pid=3673787: Mon Oct 7 09:59:42 2024 00:38:44.402 read: IOPS=671, BW=2685KiB/s (2749kB/s)(26.2MiB/10011msec) 00:38:44.402 slat (nsec): min=5648, max=63826, avg=14577.48, stdev=10364.96 00:38:44.402 clat (usec): min=11391, max=32385, avg=23712.63, stdev=863.88 00:38:44.402 lat (usec): min=11397, max=32391, avg=23727.21, stdev=863.32 00:38:44.402 clat percentiles (usec): 00:38:44.402 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:38:44.402 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:44.402 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:44.402 | 
99.00th=[25297], 99.50th=[25560], 99.90th=[28967], 99.95th=[30540], 00:38:44.402 | 99.99th=[32375] 00:38:44.402 bw ( KiB/s): min= 2560, max= 2816, per=4.13%, avg=2681.26, stdev=51.80, samples=19 00:38:44.402 iops : min= 640, max= 704, avg=670.32, stdev=12.95, samples=19 00:38:44.402 lat (msec) : 20=0.77%, 50=99.23% 00:38:44.402 cpu : usr=98.93%, sys=0.78%, ctx=41, majf=0, minf=45 00:38:44.402 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:44.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.402 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.402 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.402 filename2: (groupid=0, jobs=1): err= 0: pid=3673788: Mon Oct 7 09:59:42 2024 00:38:44.402 read: IOPS=670, BW=2681KiB/s (2746kB/s)(26.2MiB/10004msec) 00:38:44.402 slat (nsec): min=5614, max=97374, avg=20515.44, stdev=15612.74 00:38:44.403 clat (usec): min=12651, max=39183, avg=23683.65, stdev=1431.65 00:38:44.403 lat (usec): min=12661, max=39208, avg=23704.17, stdev=1431.13 00:38:44.403 clat percentiles (usec): 00:38:44.403 | 1.00th=[18482], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:38:44.403 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:38:44.403 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:44.403 | 99.00th=[29492], 99.50th=[34341], 99.90th=[39060], 99.95th=[39060], 00:38:44.403 | 99.99th=[39060] 00:38:44.403 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2683.47, stdev=53.28, samples=19 00:38:44.403 iops : min= 640, max= 704, avg=670.84, stdev=13.32, samples=19 00:38:44.403 lat (msec) : 20=1.63%, 50=98.37% 00:38:44.403 cpu : usr=98.44%, sys=1.12%, ctx=168, majf=0, minf=68 00:38:44.403 IO depths : 1=5.7%, 2=11.5%, 4=23.8%, 8=52.1%, 16=6.9%, 32=0.0%, >=64=0.0% 00:38:44.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.403 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.403 issued rwts: total=6706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.403 filename2: (groupid=0, jobs=1): err= 0: pid=3673789: Mon Oct 7 09:59:42 2024 00:38:44.403 read: IOPS=669, BW=2679KiB/s (2743kB/s)(26.2MiB/10009msec) 00:38:44.403 slat (nsec): min=5771, max=74043, avg=20334.07, stdev=12317.92 00:38:44.403 clat (usec): min=15990, max=39762, avg=23712.91, stdev=988.41 00:38:44.403 lat (usec): min=15999, max=39789, avg=23733.24, stdev=988.45 00:38:44.403 clat percentiles (usec): 00:38:44.403 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:44.403 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:44.403 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:44.403 | 99.00th=[25035], 99.50th=[25560], 99.90th=[39584], 99.95th=[39584], 00:38:44.403 | 99.99th=[39584] 00:38:44.403 bw ( KiB/s): min= 2560, max= 2816, per=4.13%, avg=2681.26, stdev=51.80, samples=19 00:38:44.403 iops : min= 640, max= 704, avg=670.32, stdev=12.95, samples=19 00:38:44.403 lat (msec) : 20=0.27%, 50=99.73% 00:38:44.403 cpu : usr=98.34%, sys=1.05%, ctx=131, majf=0, minf=64 00:38:44.403 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:44.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.403 complete : 0=0.0%, 4=94.1%, 
8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.403 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.403 filename2: (groupid=0, jobs=1): err= 0: pid=3673790: Mon Oct 7 09:59:42 2024 00:38:44.403 read: IOPS=669, BW=2680KiB/s (2744kB/s)(26.2MiB/10006msec) 00:38:44.403 slat (nsec): min=5621, max=62987, avg=18108.39, stdev=11307.49 00:38:44.403 clat (usec): min=11107, max=53541, avg=23706.49, stdev=1328.84 00:38:44.403 lat (usec): min=11113, max=53561, avg=23724.60, stdev=1328.75 00:38:44.403 clat percentiles (usec): 00:38:44.403 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:44.403 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:44.403 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:44.403 | 99.00th=[25035], 99.50th=[25560], 99.90th=[42730], 99.95th=[42730], 00:38:44.403 | 99.99th=[53740] 00:38:44.403 bw ( KiB/s): min= 2432, max= 2816, per=4.12%, avg=2674.21, stdev=72.54, samples=19 00:38:44.403 iops : min= 608, max= 704, avg=668.53, stdev=18.13, samples=19 00:38:44.403 lat (msec) : 20=0.51%, 50=99.46%, 100=0.03% 00:38:44.403 cpu : usr=98.94%, sys=0.79%, ctx=11, majf=0, minf=48 00:38:44.403 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:44.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.403 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.403 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.403 filename2: (groupid=0, jobs=1): err= 0: pid=3673791: Mon Oct 7 09:59:42 2024 00:38:44.403 read: IOPS=701, BW=2804KiB/s (2872kB/s)(27.4MiB/10013msec) 00:38:44.403 slat (nsec): min=5609, max=72522, avg=10128.36, stdev=7277.92 00:38:44.403 clat (usec): min=9619, max=47254, avg=22764.90, stdev=4142.98 00:38:44.403 lat (usec): min=9639, max=47277, avg=22775.02, stdev=4143.30 00:38:44.403 clat percentiles (usec): 00:38:44.403 | 1.00th=[13435], 5.00th=[15533], 10.00th=[16909], 20.00th=[19792], 00:38:44.403 | 30.00th=[20841], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:44.403 | 70.00th=[23987], 80.00th=[24511], 90.00th=[27395], 95.00th=[29230], 00:38:44.403 | 99.00th=[34341], 99.50th=[36963], 99.90th=[46924], 99.95th=[47449], 00:38:44.403 | 99.99th=[47449] 00:38:44.403 bw ( KiB/s): min= 2528, max= 3024, per=4.32%, avg=2803.70, stdev=131.23, samples=20 00:38:44.403 iops : min= 632, max= 756, avg=700.90, stdev=32.81, samples=20 00:38:44.403 lat (msec) : 10=0.23%, 20=20.77%, 50=79.00% 00:38:44.403 cpu : usr=98.29%, sys=1.27%, ctx=79, majf=0, minf=83 00:38:44.403 IO depths : 1=0.4%, 2=0.8%, 4=4.3%, 8=79.5%, 16=14.9%, 32=0.0%, >=64=0.0% 00:38:44.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.403 complete : 0=0.0%, 4=89.1%, 8=8.1%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.403 issued rwts: total=7020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.403 filename2: (groupid=0, jobs=1): err= 0: pid=3673792: Mon Oct 7 09:59:42 2024 00:38:44.403 read: IOPS=681, BW=2727KiB/s (2793kB/s)(26.6MiB/10003msec) 00:38:44.403 slat (nsec): min=5608, max=84705, avg=18767.05, stdev=14231.73 00:38:44.403 clat (usec): min=2487, max=47046, avg=23316.26, stdev=3109.07 00:38:44.403 lat (usec): min=2500, max=47063, avg=23335.03, stdev=3109.81 
00:38:44.403 clat percentiles (usec): 00:38:44.403 | 1.00th=[13173], 5.00th=[18220], 10.00th=[20055], 20.00th=[23200], 00:38:44.403 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:38:44.403 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[27395], 00:38:44.403 | 99.00th=[33817], 99.50th=[35914], 99.90th=[46924], 99.95th=[46924], 00:38:44.403 | 99.99th=[46924] 00:38:44.403 bw ( KiB/s): min= 2436, max= 2976, per=4.18%, avg=2714.32, stdev=101.45, samples=19 00:38:44.403 iops : min= 609, max= 744, avg=678.58, stdev=25.36, samples=19 00:38:44.403 lat (msec) : 4=0.09%, 10=0.47%, 20=9.09%, 50=90.35% 00:38:44.403 cpu : usr=98.68%, sys=1.02%, ctx=72, majf=0, minf=67 00:38:44.403 IO depths : 1=3.5%, 2=7.2%, 4=15.8%, 8=63.1%, 16=10.4%, 32=0.0%, >=64=0.0% 00:38:44.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.403 complete : 0=0.0%, 4=91.9%, 8=3.8%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.403 issued rwts: total=6820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.403 00:38:44.403 Run status group 0 (all jobs): 00:38:44.403 READ: bw=63.3MiB/s (66.4MB/s), 2667KiB/s-2879KiB/s (2731kB/s-2948kB/s), io=635MiB (666MB), run=10001-10023msec 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:44.403 09:59:42 
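Each per-thread block above follows the same shape: slat is submission latency (time to hand the I/O to the spdk_bdev engine), clat is completion latency, bw/iops give that thread's throughput with "per=" expressing its share of the group total, and the "IO depths" lines histogram how full the iodepth=16 queue was at submit and complete time. With 24 threads each reading roughly 2.7 MiB/s from the null bdevs, the group summary lands at 63.3 MiB/s and io=635 MiB over the ~10-second runs. A quick sanity check of that aggregation:

    # 24 threads at the per-thread average should reproduce the group bandwidth
    echo "$(( 24 * 2700 / 1024 )) MiB/s"   # ~63, matching the READ line above

The destroy_subsystems 0 1 2 calls that follow then unwind the three-subsystem setup before the next parameter set.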
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:44.403 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.404 bdev_null0 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- 
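With subsystems 0-2 torn down, the test flips to its next parameter set: NULL_DIF=1 (protection information type 1 on the null bdevs), a read/write/trim block-size triple bs=8k,16k,128k, numjobs=2, iodepth=8, a 5-second runtime, and files=1, so only subsystems 0 and 1 are recreated. The log never prints the job file gen_fio_conf writes to /dev/fd/61, but from the banners further down it plausibly looks like this (a reconstruction under those assumptions, not the file itself):

    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=8k,16k,128k      ; read,write,trim sizes; randread only exercises the 8k reads
    iodepth=8
    numjobs=2
    runtime=5

    [filename0]
    filename=Nvme0n1    ; namespace bdev created by bdev_nvme_attach_controller name=Nvme0

    [filename1]
    filename=Nvme1n1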
common/autotest_common.sh@10 -- # set +x 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.404 [2024-10-07 09:59:42.772313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.404 bdev_null1 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:38:44.404 09:59:42 
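Both bdev_null0 and bdev_null1 now carry --dif-type 1, i.e. type 1 protection information, where the 32-bit reference tag is expected to track the LBA, in contrast to the type 2 bdevs of the previous run, whose expected reference tags are supplied per command. To confirm what the target actually created, a hypothetical check could query the bdev back (field names as emitted by recent SPDK; this command is not part of the log):

    scripts/rpc.py bdev_get_bdevs -b bdev_null0 \
        | jq '.[0] | {block_size, md_size, dif_type}'
    # expected: {"block_size": 512, "md_size": 16, "dif_type": 1}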
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:44.404 { 00:38:44.404 "params": { 00:38:44.404 "name": "Nvme$subsystem", 00:38:44.404 "trtype": "$TEST_TRANSPORT", 00:38:44.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:44.404 "adrfam": "ipv4", 00:38:44.404 "trsvcid": "$NVMF_PORT", 00:38:44.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:44.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:44.404 "hdgst": ${hdgst:-false}, 00:38:44.404 "ddgst": ${ddgst:-false} 00:38:44.404 }, 00:38:44.404 "method": "bdev_nvme_attach_controller" 00:38:44.404 } 00:38:44.404 EOF 00:38:44.404 )") 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1325 -- # local fio_dir=/usr/src/fio 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1327 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1327 -- # local sanitizers 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1328 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1329 -- # shift 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # local asan_lib= 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1332 -- # for sanitizer in "${sanitizers[@]}" 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # grep libasan 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # awk '{print $3}' 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:44.404 { 00:38:44.404 "params": { 00:38:44.404 "name": "Nvme$subsystem", 00:38:44.404 "trtype": "$TEST_TRANSPORT", 00:38:44.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:44.404 "adrfam": "ipv4", 00:38:44.404 "trsvcid": "$NVMF_PORT", 00:38:44.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:44.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:44.404 "hdgst": ${hdgst:-false}, 00:38:44.404 "ddgst": ${ddgst:-false} 00:38:44.404 }, 00:38:44.404 "method": "bdev_nvme_attach_controller" 00:38:44.404 } 00:38:44.404 EOF 
00:38:44.404 )") 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:44.404 "params": { 00:38:44.404 "name": "Nvme0", 00:38:44.404 "trtype": "tcp", 00:38:44.404 "traddr": "10.0.0.2", 00:38:44.404 "adrfam": "ipv4", 00:38:44.404 "trsvcid": "4420", 00:38:44.404 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:44.404 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:44.404 "hdgst": false, 00:38:44.404 "ddgst": false 00:38:44.404 }, 00:38:44.404 "method": "bdev_nvme_attach_controller" 00:38:44.404 },{ 00:38:44.404 "params": { 00:38:44.404 "name": "Nvme1", 00:38:44.404 "trtype": "tcp", 00:38:44.404 "traddr": "10.0.0.2", 00:38:44.404 "adrfam": "ipv4", 00:38:44.404 "trsvcid": "4420", 00:38:44.404 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:44.404 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:44.404 "hdgst": false, 00:38:44.404 "ddgst": false 00:38:44.404 }, 00:38:44.404 "method": "bdev_nvme_attach_controller" 00:38:44.404 }' 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # asan_lib= 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # [[ -n '' ]] 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1332 -- # for sanitizer in "${sanitizers[@]}" 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # grep libclang_rt.asan 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # awk '{print $3}' 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # asan_lib= 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # [[ -n '' ]] 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:44.404 09:59:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:44.404 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:44.404 ... 00:38:44.404 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:44.404 ... 
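In the job banners that follow, fio reports each section's read/write/trim block sizes as (R)/(W)/(T); since rw=randread, only the 8 KiB read size is exercised, and numjobs=2 across the two filename sections is what produces the "Starting 4 threads" line. An ad-hoc command line approximating one of these jobs would look roughly like this (a flag-for-flag reconstruction, not taken from the log; the JSON path is a placeholder):

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev --filename=Nvme0n1 \
        --rw=randread --bs=8k,16k,128k --iodepth=8 --numjobs=2 --runtime=5 \
        --spdk_json_conf=/path/to/bdev.json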
00:38:44.404 fio-3.35 00:38:44.404 Starting 4 threads 00:38:49.696 00:38:49.696 filename0: (groupid=0, jobs=1): err= 0: pid=3676116: Mon Oct 7 09:59:49 2024 00:38:49.696 read: IOPS=2993, BW=23.4MiB/s (24.5MB/s)(117MiB/5001msec) 00:38:49.696 slat (nsec): min=5468, max=36721, avg=6262.89, stdev=2127.54 00:38:49.696 clat (usec): min=797, max=4827, avg=2657.70, stdev=222.43 00:38:49.696 lat (usec): min=802, max=4833, avg=2663.96, stdev=222.05 00:38:49.696 clat percentiles (usec): 00:38:49.696 | 1.00th=[ 1647], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2638], 00:38:49.696 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:38:49.696 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2737], 95.00th=[ 2900], 00:38:49.696 | 99.00th=[ 3425], 99.50th=[ 3818], 99.90th=[ 4178], 99.95th=[ 4228], 00:38:49.696 | 99.99th=[ 4817] 00:38:49.696 bw ( KiB/s): min=23744, max=24737, per=25.19%, avg=23945.00, stdev=303.07, samples=9 00:38:49.696 iops : min= 2968, max= 3092, avg=2993.11, stdev=37.84, samples=9 00:38:49.696 lat (usec) : 1000=0.02% 00:38:49.696 lat (msec) : 2=1.55%, 4=98.15%, 10=0.28% 00:38:49.696 cpu : usr=96.50%, sys=3.26%, ctx=5, majf=0, minf=9 00:38:49.696 IO depths : 1=0.1%, 2=0.2%, 4=68.4%, 8=31.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:49.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:49.696 complete : 0=0.0%, 4=95.4%, 8=4.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:49.696 issued rwts: total=14968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:49.696 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:49.696 filename0: (groupid=0, jobs=1): err= 0: pid=3676118: Mon Oct 7 09:59:49 2024 00:38:49.696 read: IOPS=2938, BW=23.0MiB/s (24.1MB/s)(115MiB/5001msec) 00:38:49.696 slat (nsec): min=5457, max=29105, avg=6001.51, stdev=1649.10 00:38:49.696 clat (usec): min=1092, max=45419, avg=2706.03, stdev=1012.67 00:38:49.696 lat (usec): min=1098, max=45449, avg=2712.03, stdev=1012.83 00:38:49.696 clat percentiles (usec): 00:38:49.696 | 1.00th=[ 2278], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2638], 00:38:49.696 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:38:49.696 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2737], 95.00th=[ 2933], 00:38:49.696 | 99.00th=[ 3720], 99.50th=[ 3982], 99.90th=[ 4359], 99.95th=[45351], 00:38:49.696 | 99.99th=[45351] 00:38:49.696 bw ( KiB/s): min=21632, max=23808, per=24.71%, avg=23482.67, stdev=700.31, samples=9 00:38:49.696 iops : min= 2704, max= 2976, avg=2935.33, stdev=87.54, samples=9 00:38:49.696 lat (msec) : 2=0.19%, 4=99.36%, 10=0.39%, 50=0.05% 00:38:49.696 cpu : usr=96.40%, sys=3.38%, ctx=13, majf=0, minf=9 00:38:49.696 IO depths : 1=0.1%, 2=0.1%, 4=73.0%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:49.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:49.696 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:49.696 issued rwts: total=14695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:49.696 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:49.696 filename1: (groupid=0, jobs=1): err= 0: pid=3676119: Mon Oct 7 09:59:49 2024 00:38:49.696 read: IOPS=2931, BW=22.9MiB/s (24.0MB/s)(115MiB/5003msec) 00:38:49.696 slat (nsec): min=5456, max=30361, avg=6176.84, stdev=1834.36 00:38:49.696 clat (usec): min=1411, max=44215, avg=2712.12, stdev=988.64 00:38:49.696 lat (usec): min=1417, max=44240, avg=2718.30, stdev=988.77 00:38:49.696 clat percentiles (usec): 00:38:49.696 | 1.00th=[ 2245], 5.00th=[ 2507], 10.00th=[ 2573], 
20.00th=[ 2638], 00:38:49.696 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:38:49.697 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2769], 95.00th=[ 2933], 00:38:49.697 | 99.00th=[ 3720], 99.50th=[ 3982], 99.90th=[ 4359], 99.95th=[44303], 00:38:49.697 | 99.99th=[44303] 00:38:49.697 bw ( KiB/s): min=21376, max=23760, per=24.65%, avg=23429.33, stdev=773.52, samples=9 00:38:49.697 iops : min= 2672, max= 2970, avg=2928.67, stdev=96.69, samples=9 00:38:49.697 lat (msec) : 2=0.25%, 4=99.34%, 10=0.36%, 50=0.05% 00:38:49.697 cpu : usr=94.92%, sys=4.12%, ctx=212, majf=0, minf=9 00:38:49.697 IO depths : 1=0.1%, 2=0.1%, 4=72.4%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:49.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:49.697 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:49.697 issued rwts: total=14665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:49.697 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:49.697 filename1: (groupid=0, jobs=1): err= 0: pid=3676120: Mon Oct 7 09:59:49 2024 00:38:49.697 read: IOPS=3020, BW=23.6MiB/s (24.7MB/s)(118MiB/5002msec) 00:38:49.697 slat (nsec): min=5456, max=54074, avg=6224.30, stdev=2171.81 00:38:49.697 clat (usec): min=924, max=4226, avg=2633.28, stdev=250.86 00:38:49.697 lat (usec): min=934, max=4232, avg=2639.50, stdev=250.62 00:38:49.697 clat percentiles (usec): 00:38:49.697 | 1.00th=[ 1549], 5.00th=[ 2245], 10.00th=[ 2442], 20.00th=[ 2606], 00:38:49.697 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:38:49.697 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2704], 95.00th=[ 2868], 00:38:49.697 | 99.00th=[ 3556], 99.50th=[ 3818], 99.90th=[ 4015], 99.95th=[ 4047], 00:38:49.697 | 99.99th=[ 4228] 00:38:49.697 bw ( KiB/s): min=23872, max=25808, per=25.46%, avg=24200.89, stdev=612.93, samples=9 00:38:49.697 iops : min= 2984, max= 3226, avg=3025.11, stdev=76.62, samples=9 00:38:49.697 lat (usec) : 1000=0.04% 00:38:49.697 lat (msec) : 2=2.30%, 4=97.54%, 10=0.12% 00:38:49.697 cpu : usr=96.00%, sys=3.76%, ctx=6, majf=0, minf=9 00:38:49.697 IO depths : 1=0.1%, 2=0.1%, 4=69.0%, 8=30.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:49.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:49.697 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:49.697 issued rwts: total=15108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:49.697 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:49.697 00:38:49.697 Run status group 0 (all jobs): 00:38:49.697 READ: bw=92.8MiB/s (97.3MB/s), 22.9MiB/s-23.6MiB/s (24.0MB/s-24.7MB/s), io=464MiB (487MB), run=5001-5003msec 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:49.697 00:38:49.697 real 0m24.676s 00:38:49.697 user 5m13.954s 00:38:49.697 sys 0m4.992s 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # xtrace_disable 00:38:49.697 09:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:49.697 ************************************ 00:38:49.697 END TEST fio_dif_rand_params 00:38:49.697 ************************************ 00:38:49.958 09:59:49 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:49.958 09:59:49 nvmf_dif -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:38:49.958 09:59:49 nvmf_dif -- common/autotest_common.sh@1110 -- # xtrace_disable 00:38:49.958 09:59:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:49.958 ************************************ 00:38:49.958 START TEST fio_dif_digest 00:38:49.958 ************************************ 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # fio_dif_digest 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:49.958 bdev_null0 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@564 -- # xtrace_disable 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:49.958 [2024-10-07 09:59:49.458280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:49.958 { 00:38:49.958 "params": { 00:38:49.958 "name": "Nvme$subsystem", 00:38:49.958 "trtype": "$TEST_TRANSPORT", 00:38:49.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:49.958 "adrfam": "ipv4", 00:38:49.958 "trsvcid": "$NVMF_PORT", 00:38:49.958 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:49.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:49.958 "hdgst": ${hdgst:-false}, 00:38:49.958 "ddgst": ${ddgst:-false} 00:38:49.958 }, 00:38:49.958 "method": "bdev_nvme_attach_controller" 00:38:49.958 } 00:38:49.958 EOF 00:38:49.958 )") 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1325 -- # local fio_dir=/usr/src/fio 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1327 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1327 -- # local sanitizers 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1328 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1329 -- # shift 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1331 -- # local asan_lib= 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1332 -- # for sanitizer in "${sanitizers[@]}" 00:38:49.958 09:59:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:38:49.959 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:49.959 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:49.959 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # grep libasan 00:38:49.959 09:59:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:49.959 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # awk '{print $3}' 00:38:49.959 09:59:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:38:49.959 09:59:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:38:49.959 09:59:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:49.959 "params": { 00:38:49.959 "name": "Nvme0", 00:38:49.959 "trtype": "tcp", 00:38:49.959 "traddr": "10.0.0.2", 00:38:49.959 "adrfam": "ipv4", 00:38:49.959 "trsvcid": "4420", 00:38:49.959 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:49.959 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:49.959 "hdgst": true, 00:38:49.959 "ddgst": true 00:38:49.959 }, 00:38:49.959 "method": "bdev_nvme_attach_controller" 00:38:49.959 }' 00:38:49.959 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # asan_lib= 00:38:49.959 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1334 -- # [[ -n '' ]] 00:38:49.959 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1332 -- # for sanitizer in "${sanitizers[@]}" 00:38:49.959 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:49.959 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # grep libclang_rt.asan 00:38:49.959 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # awk '{print $3}' 00:38:49.959 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # asan_lib= 00:38:49.959 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1334 -- # [[ -n '' ]] 00:38:49.959 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:49.959 09:59:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:50.527 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:50.527 ... 
00:38:50.527 fio-3.35 00:38:50.527 Starting 3 threads 00:39:02.765 00:39:02.765 filename0: (groupid=0, jobs=1): err= 0: pid=3677506: Mon Oct 7 10:00:00 2024 00:39:02.765 read: IOPS=335, BW=41.9MiB/s (44.0MB/s)(421MiB/10047msec) 00:39:02.765 slat (nsec): min=5875, max=32155, avg=6661.24, stdev=970.60 00:39:02.765 clat (usec): min=5856, max=50824, avg=8922.69, stdev=1629.32 00:39:02.765 lat (usec): min=5862, max=50830, avg=8929.35, stdev=1629.36 00:39:02.765 clat percentiles (usec): 00:39:02.765 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 7570], 00:39:02.765 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9372], 00:39:02.765 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10552], 95.00th=[10945], 00:39:02.765 | 99.00th=[11731], 99.50th=[11863], 99.90th=[12780], 99.95th=[46924], 00:39:02.765 | 99.99th=[50594] 00:39:02.765 bw ( KiB/s): min=39168, max=45312, per=39.28%, avg=43110.40, stdev=1628.43, samples=20 00:39:02.765 iops : min= 306, max= 354, avg=336.80, stdev=12.72, samples=20 00:39:02.765 lat (msec) : 10=76.11%, 20=23.83%, 50=0.03%, 100=0.03% 00:39:02.765 cpu : usr=94.07%, sys=5.70%, ctx=17, majf=0, minf=137 00:39:02.765 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:02.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:02.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:02.765 issued rwts: total=3370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:02.765 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:02.765 filename0: (groupid=0, jobs=1): err= 0: pid=3677507: Mon Oct 7 10:00:00 2024 00:39:02.765 read: IOPS=336, BW=42.0MiB/s (44.1MB/s)(422MiB/10046msec) 00:39:02.765 slat (nsec): min=5854, max=30741, avg=8390.98, stdev=1549.21 00:39:02.765 clat (usec): min=5603, max=51373, avg=8901.55, stdev=1714.94 00:39:02.765 lat (usec): min=5609, max=51379, avg=8909.94, stdev=1714.87 00:39:02.765 clat percentiles (usec): 00:39:02.765 | 1.00th=[ 6194], 5.00th=[ 6783], 10.00th=[ 7111], 20.00th=[ 7439], 00:39:02.765 | 30.00th=[ 7832], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9503], 00:39:02.765 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10683], 95.00th=[11076], 00:39:02.765 | 99.00th=[11863], 99.50th=[12256], 99.90th=[12780], 99.95th=[45876], 00:39:02.765 | 99.99th=[51119] 00:39:02.765 bw ( KiB/s): min=40448, max=46080, per=39.36%, avg=43200.00, stdev=1635.78, samples=20 00:39:02.765 iops : min= 316, max= 360, avg=337.50, stdev=12.78, samples=20 00:39:02.765 lat (msec) : 10=74.06%, 20=25.88%, 50=0.03%, 100=0.03% 00:39:02.765 cpu : usr=94.60%, sys=5.15%, ctx=17, majf=0, minf=159 00:39:02.765 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:02.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:02.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:02.765 issued rwts: total=3377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:02.765 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:02.765 filename0: (groupid=0, jobs=1): err= 0: pid=3677508: Mon Oct 7 10:00:00 2024 00:39:02.765 read: IOPS=185, BW=23.2MiB/s (24.4MB/s)(233MiB/10046msec) 00:39:02.765 slat (nsec): min=5904, max=31564, avg=9221.00, stdev=1394.71 00:39:02.765 clat (usec): min=6478, max=93068, avg=16108.53, stdev=15675.18 00:39:02.765 lat (usec): min=6485, max=93077, avg=16117.75, stdev=15675.15 00:39:02.765 clat percentiles (usec): 00:39:02.765 | 1.00th=[ 7963], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 
9241], 00:39:02.765 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:39:02.765 | 70.00th=[10552], 80.00th=[11076], 90.00th=[50070], 95.00th=[51119], 00:39:02.766 | 99.00th=[53740], 99.50th=[90702], 99.90th=[91751], 99.95th=[92799], 00:39:02.766 | 99.99th=[92799] 00:39:02.766 bw ( KiB/s): min=15616, max=33024, per=21.75%, avg=23872.00, stdev=4747.28, samples=20 00:39:02.766 iops : min= 122, max= 258, avg=186.50, stdev=37.09, samples=20 00:39:02.766 lat (msec) : 10=48.96%, 20=36.53%, 50=3.59%, 100=10.93% 00:39:02.766 cpu : usr=95.79%, sys=3.97%, ctx=41, majf=0, minf=134 00:39:02.766 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:02.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:02.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:02.766 issued rwts: total=1867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:02.766 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:02.766 00:39:02.766 Run status group 0 (all jobs): 00:39:02.766 READ: bw=107MiB/s (112MB/s), 23.2MiB/s-42.0MiB/s (24.4MB/s-44.1MB/s), io=1077MiB (1129MB), run=10046-10047msec 00:39:02.766 10:00:00 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:02.766 10:00:00 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:02.766 10:00:00 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:02.766 10:00:00 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:02.766 10:00:00 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:39:02.766 10:00:00 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:02.766 10:00:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@564 -- # xtrace_disable 00:39:02.766 10:00:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:02.766 10:00:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:39:02.766 10:00:00 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:02.766 10:00:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@564 -- # xtrace_disable 00:39:02.766 10:00:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:02.766 10:00:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:39:02.766 00:39:02.766 real 0m11.154s 00:39:02.766 user 0m44.741s 00:39:02.766 sys 0m1.811s 00:39:02.766 10:00:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # xtrace_disable 00:39:02.766 10:00:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:02.766 ************************************ 00:39:02.766 END TEST fio_dif_digest 00:39:02.766 ************************************ 00:39:02.766 10:00:00 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:02.766 10:00:00 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:02.766 10:00:00 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:02.766 10:00:00 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:39:02.766 10:00:00 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:02.766 10:00:00 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:39:02.766 10:00:00 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:02.766 10:00:00 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:02.766 rmmod nvme_tcp 00:39:02.766 rmmod nvme_fabrics 00:39:02.766 rmmod nvme_keyring 00:39:02.766 10:00:00 
nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:02.766 10:00:00 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:39:02.766 10:00:00 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:39:02.766 10:00:00 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 3667228 ']' 00:39:02.766 10:00:00 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 3667228 00:39:02.766 10:00:00 nvmf_dif -- common/autotest_common.sh@953 -- # '[' -z 3667228 ']' 00:39:02.766 10:00:00 nvmf_dif -- common/autotest_common.sh@957 -- # kill -0 3667228 00:39:02.766 10:00:00 nvmf_dif -- common/autotest_common.sh@958 -- # uname 00:39:02.766 10:00:00 nvmf_dif -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:39:02.766 10:00:00 nvmf_dif -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3667228 00:39:02.766 10:00:00 nvmf_dif -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:39:02.766 10:00:00 nvmf_dif -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:39:02.766 10:00:00 nvmf_dif -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3667228' 00:39:02.766 killing process with pid 3667228 00:39:02.766 10:00:00 nvmf_dif -- common/autotest_common.sh@972 -- # kill 3667228 00:39:02.766 10:00:00 nvmf_dif -- common/autotest_common.sh@977 -- # wait 3667228 00:39:02.766 10:00:00 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:39:02.766 10:00:00 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:04.685 Waiting for block devices as requested 00:39:04.946 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:04.946 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:04.946 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:05.207 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:05.207 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:05.207 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:05.467 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:05.467 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:05.467 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:05.728 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:05.728 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:05.989 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:05.989 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:05.989 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:06.249 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:06.249 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:06.249 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:06.510 10:00:06 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:06.510 10:00:06 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:06.510 10:00:06 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:39:06.510 10:00:06 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:39:06.510 10:00:06 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:06.510 10:00:06 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:39:06.510 10:00:06 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:06.510 10:00:06 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:06.510 10:00:06 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:06.510 10:00:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:06.510 10:00:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:09.080 10:00:08 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
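(Editor's sketch, not captured output.) The teardown traced above — nvmftestfini from nvmf/common.sh@514 onward — condenses to the sequence below. Helper names (nvmftestfini, killprocess, iptr, remove_spdk_ns) and the pid/interface/namespace values are taken from this run; the ip netns delete body is an assumption inferred from the matching ip netns add during setup, since the trace suppresses _remove_spdk_ns output:

    nvmftestfini_sketch() {
        sync
        # Unload the kernel initiator stack; the harness loops this with
        # set +e for up to 20 attempts, hence the '|| true' stand-in here.
        modprobe -v -r nvme-tcp || true      # rmmod nvme_tcp/nvme_keyring above
        modprobe -v -r nvme-fabrics || true  # rmmod nvme_fabrics above
        # killprocess: stop the nvmf_tgt app started for this test suite
        kill 3667228 && wait 3667228
        # iptr: reload the ruleset with every SPDK_NVMF-tagged rule dropped
        iptables-save | grep -v SPDK_NVMF | iptables-restore
        # remove_spdk_ns (assumed body): drop the target-side namespace
        ip netns delete cvl_0_0_ns_spdk
        # leave the initiator interface without stale addresses
        ip -4 addr flush cvl_0_1
    }

After this, setup.sh reset (traced next) rebinds the ioatdma and NVMe devices from vfio-pci back to their kernel drivers, and the following nvmf_abort_qd_sizes suite re-runs setup.sh to claim them again.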
00:39:09.080 00:39:09.080 real 1m19.146s 00:39:09.080 user 7m56.287s 00:39:09.080 sys 0m23.100s 00:39:09.080 10:00:08 nvmf_dif -- common/autotest_common.sh@1129 -- # xtrace_disable 00:39:09.080 10:00:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:09.080 ************************************ 00:39:09.080 END TEST nvmf_dif 00:39:09.080 ************************************ 00:39:09.080 10:00:08 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:09.080 10:00:08 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:39:09.080 10:00:08 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:39:09.080 10:00:08 -- common/autotest_common.sh@10 -- # set +x 00:39:09.080 ************************************ 00:39:09.080 START TEST nvmf_abort_qd_sizes 00:39:09.080 ************************************ 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:09.080 * Looking for test storage... 00:39:09.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1626 -- # lcov --version 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:39:09.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:09.080 --rc genhtml_branch_coverage=1 00:39:09.080 --rc genhtml_function_coverage=1 00:39:09.080 --rc genhtml_legend=1 00:39:09.080 --rc geninfo_all_blocks=1 00:39:09.080 --rc geninfo_unexecuted_blocks=1 00:39:09.080 00:39:09.080 ' 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:39:09.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:09.080 --rc genhtml_branch_coverage=1 00:39:09.080 --rc genhtml_function_coverage=1 00:39:09.080 --rc genhtml_legend=1 00:39:09.080 --rc geninfo_all_blocks=1 00:39:09.080 --rc geninfo_unexecuted_blocks=1 00:39:09.080 00:39:09.080 ' 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:39:09.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:09.080 --rc genhtml_branch_coverage=1 00:39:09.080 --rc genhtml_function_coverage=1 00:39:09.080 --rc genhtml_legend=1 00:39:09.080 --rc geninfo_all_blocks=1 00:39:09.080 --rc geninfo_unexecuted_blocks=1 00:39:09.080 00:39:09.080 ' 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:39:09.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:09.080 --rc genhtml_branch_coverage=1 00:39:09.080 --rc genhtml_function_coverage=1 00:39:09.080 --rc genhtml_legend=1 00:39:09.080 --rc geninfo_all_blocks=1 00:39:09.080 --rc geninfo_unexecuted_blocks=1 00:39:09.080 00:39:09.080 ' 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:09.080 10:00:08 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- 
paths/export.sh@5 -- # export PATH 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:09.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:39:09.081 10:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- 
nvmf/common.sh@319 -- # net_devs=() 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:17.372 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:17.372 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:17.372 Found net devices under 0000:31:00.0: cvl_0_0 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:17.372 Found net devices under 0000:31:00.1: cvl_0_1 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:17.372 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:17.372 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:17.372 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:17.372 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:17.372 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:17.372 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:17.372 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:17.372 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:17.372 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:17.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:17.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:39:17.372 00:39:17.372 --- 10.0.0.2 ping statistics --- 00:39:17.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:17.372 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:39:17.372 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:17.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:17.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:39:17.372 00:39:17.372 --- 10.0.0.1 ping statistics --- 00:39:17.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:17.372 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:39:17.372 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:17.372 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:39:17.372 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:39:17.372 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:20.680 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:20.680 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:20.680 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:20.680 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:20.680 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:20.680 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:20.680 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:20.680 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:20.680 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:20.680 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:20.680 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:20.680 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:20.680 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:20.680 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:20.680 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:20.680 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:20.680 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:20.941 10:00:20 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:20.941 10:00:20 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:20.942 10:00:20 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:20.942 10:00:20 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:20.942 10:00:20 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:20.942 10:00:20 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:20.942 10:00:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:20.942 10:00:20 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:20.942 10:00:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@727 -- # xtrace_disable 00:39:20.942 10:00:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:20.942 10:00:20 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=3687617 00:39:20.942 10:00:20 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 3687617 00:39:20.942 10:00:20 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:20.942 10:00:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # '[' -z 3687617 ']' 00:39:20.942 10:00:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:20.942 10:00:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local max_retries=100 00:39:20.942 10:00:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:20.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:20.942 10:00:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@843 -- # xtrace_disable 00:39:20.942 10:00:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:20.942 [2024-10-07 10:00:20.516399] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:39:20.942 [2024-10-07 10:00:20.516457] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:21.203 [2024-10-07 10:00:20.608583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:21.203 [2024-10-07 10:00:20.706694] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:21.203 [2024-10-07 10:00:20.706753] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:21.203 [2024-10-07 10:00:20.706761] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:21.203 [2024-10-07 10:00:20.706769] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:21.203 [2024-10-07 10:00:20.706777] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:21.203 [2024-10-07 10:00:20.708848] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:39:21.203 [2024-10-07 10:00:20.709013] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:39:21.203 [2024-10-07 10:00:20.709152] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:39:21.203 [2024-10-07 10:00:20.709153] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@867 -- # return 0 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@733 -- # xtrace_disable 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:39:21.777 
10:00:21 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1110 -- # xtrace_disable 00:39:21.777 10:00:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:21.777 ************************************ 00:39:21.777 START TEST spdk_target_abort 00:39:21.777 ************************************ 00:39:21.777 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # spdk_target 00:39:22.039 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:22.039 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:39:22.039 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:39:22.039 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:22.300 spdk_targetn1 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:22.300 [2024-10-07 10:00:21.756827] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:22.300 [2024-10-07 10:00:21.797295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:22.300 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:22.301 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:22.301 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:22.301 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:22.301 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:22.301 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:22.301 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:22.301 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:22.301 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:22.301 10:00:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:22.301 [2024-10-07 10:00:21.928288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:188 nsid:1 lba:24 len:8 PRP1 0x2000078be000 PRP2 0x0 00:39:22.301 [2024-10-07 10:00:21.928336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0004 p:1 m:0 dnr:0 00:39:22.301 [2024-10-07 10:00:21.931165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:168 len:8 PRP1 0x2000078be000 PRP2 0x0 00:39:22.301 [2024-10-07 10:00:21.931199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0016 p:1 m:0 dnr:0 00:39:22.301 [2024-10-07 10:00:21.936180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:216 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:39:22.301 [2024-10-07 10:00:21.936213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:001e p:1 m:0 dnr:0 00:39:22.562 [2024-10-07 10:00:21.981165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1592 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:39:22.562 [2024-10-07 10:00:21.981202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00c9 p:1 m:0 dnr:0 00:39:22.562 [2024-10-07 10:00:21.981639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1608 len:8 PRP1 0x2000078be000 PRP2 0x0 00:39:22.562 [2024-10-07 10:00:21.981660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00cd p:1 m:0 dnr:0 00:39:22.562 [2024-10-07 10:00:22.011390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2520 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:39:22.562 [2024-10-07 10:00:22.011424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:39:22.562 [2024-10-07 10:00:22.024155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2832 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:39:22.562 [2024-10-07 10:00:22.024185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:39:22.562 [2024-10-07 10:00:22.045153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3456 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:39:22.562 [2024-10-07 10:00:22.045185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00b3 p:0 m:0 dnr:0 00:39:22.562 [2024-10-07 10:00:22.045908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3512 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:39:22.562 [2024-10-07 10:00:22.045932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00ba p:0 m:0 dnr:0 00:39:22.562 [2024-10-07 10:00:22.061270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3952 len:8 PRP1 0x2000078be000 PRP2 0x0 00:39:22.562 [2024-10-07 10:00:22.061301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00f2 p:0 m:0 dnr:0 00:39:25.869 Initializing NVMe Controllers 00:39:25.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:25.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 
00:39:25.870 Initialization complete. Launching workers. 00:39:25.870 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9821, failed: 10 00:39:25.870 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1999, failed to submit 7832 00:39:25.870 success 726, unsuccessful 1273, failed 0 00:39:25.870 10:00:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:25.870 10:00:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:25.870 [2024-10-07 10:00:25.256836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:1040 len:8 PRP1 0x200007c5a000 PRP2 0x0 00:39:25.870 [2024-10-07 10:00:25.256877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:008b p:1 m:0 dnr:0 00:39:25.870 [2024-10-07 10:00:25.272646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:1424 len:8 PRP1 0x200007c54000 PRP2 0x0 00:39:25.870 [2024-10-07 10:00:25.272670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:00b9 p:1 m:0 dnr:0 00:39:25.870 [2024-10-07 10:00:25.291842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:1984 len:8 PRP1 0x200007c54000 PRP2 0x0 00:39:25.870 [2024-10-07 10:00:25.291865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:0000 p:1 m:0 dnr:0 00:39:25.870 [2024-10-07 10:00:25.326752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:2720 len:8 PRP1 0x200007c42000 PRP2 0x0 00:39:25.870 [2024-10-07 10:00:25.326774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:39:25.870 [2024-10-07 10:00:25.337686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:179 nsid:1 lba:2864 len:8 PRP1 0x200007c56000 PRP2 0x0 00:39:25.870 [2024-10-07 10:00:25.337708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:179 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:29.171 Initializing NVMe Controllers 00:39:29.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:29.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:29.171 Initialization complete. Launching workers. 
00:39:29.171 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8588, failed: 5 00:39:29.171 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1222, failed to submit 7371 00:39:29.171 success 356, unsuccessful 866, failed 0 00:39:29.171 10:00:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:29.172 10:00:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:29.172 [2024-10-07 10:00:28.604367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:151 nsid:1 lba:1984 len:8 PRP1 0x2000078f8000 PRP2 0x0 00:39:29.172 [2024-10-07 10:00:28.604396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:151 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:39:29.744 [2024-10-07 10:00:29.351411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:171 nsid:1 lba:89736 len:8 PRP1 0x2000078f8000 PRP2 0x0 00:39:29.744 [2024-10-07 10:00:29.351435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:171 cdw0:0 sqhd:0044 p:1 m:0 dnr:0 00:39:32.287 Initializing NVMe Controllers 00:39:32.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:32.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:32.287 Initialization complete. Launching workers. 00:39:32.287 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 44048, failed: 2 00:39:32.287 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2661, failed to submit 41389 00:39:32.287 success 614, unsuccessful 2047, failed 0 00:39:32.287 10:00:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:32.287 10:00:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:39:32.287 10:00:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:32.287 10:00:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:39:32.287 10:00:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:32.287 10:00:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@564 -- # xtrace_disable 00:39:32.287 10:00:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3687617 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' -z 3687617 ']' 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # kill -0 3687617 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # uname 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3687617 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3687617' 00:39:34.215 killing process with pid 3687617 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # kill 3687617 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@977 -- # wait 3687617 00:39:34.215 00:39:34.215 real 0m12.228s 00:39:34.215 user 0m49.617s 00:39:34.215 sys 0m2.064s 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # xtrace_disable 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:34.215 ************************************ 00:39:34.215 END TEST spdk_target_abort 00:39:34.215 ************************************ 00:39:34.215 10:00:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:34.215 10:00:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:39:34.215 10:00:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1110 -- # xtrace_disable 00:39:34.215 10:00:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:34.215 ************************************ 00:39:34.215 START TEST kernel_target_abort 00:39:34.215 ************************************ 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # kernel_target 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:34.215 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:34.216 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:34.216 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:34.216 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:34.216 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:34.216 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:34.216 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:34.216 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:34.216 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:39:34.216 10:00:33 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:34.216 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:34.216 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:34.216 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:39:34.216 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:39:34.216 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:39:34.216 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:34.216 10:00:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:38.417 Waiting for block devices as requested 00:39:38.417 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:38.417 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:38.417 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:38.417 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:38.417 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:38.417 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:38.417 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:38.417 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:38.417 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:38.677 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:38.677 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:38.677 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:38.936 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:38.936 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:38.936 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:39.196 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:39.196 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:39.457 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:39:39.457 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:39.457 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:39:39.457 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1593 -- # local device=nvme0n1 00:39:39.457 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1595 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:39.457 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1596 -- # [[ none != none ]] 00:39:39.457 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:39:39.457 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:39:39.457 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:39.457 No valid GPT data, bailing 00:39:39.457 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:39.457 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:39:39.457 10:00:39 
nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:39:39.457 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:39:39.457 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:39:39.457 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:39.457 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:39:39.718 00:39:39.718 Discovery Log Number of Records 2, Generation counter 2 00:39:39.718 =====Discovery Log Entry 0====== 00:39:39.718 trtype: tcp 00:39:39.718 adrfam: ipv4 00:39:39.718 subtype: current discovery subsystem 00:39:39.718 treq: not specified, sq flow control disable supported 00:39:39.718 portid: 1 00:39:39.718 trsvcid: 4420 00:39:39.718 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:39.718 traddr: 10.0.0.1 00:39:39.718 eflags: none 00:39:39.718 sectype: none 00:39:39.718 =====Discovery Log Entry 1====== 00:39:39.718 trtype: tcp 00:39:39.718 adrfam: ipv4 00:39:39.718 subtype: nvme subsystem 00:39:39.718 treq: not specified, sq flow control disable supported 00:39:39.718 portid: 1 00:39:39.718 trsvcid: 4420 00:39:39.718 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:39.718 traddr: 10.0.0.1 00:39:39.718 eflags: none 00:39:39.718 sectype: none 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:39.718 
10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:39.718 10:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:43.015 Initializing NVMe Controllers 00:39:43.015 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:43.015 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:43.015 Initialization complete. Launching workers. 
00:39:43.015 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67476, failed: 0 00:39:43.015 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67476, failed to submit 0 00:39:43.015 success 0, unsuccessful 67476, failed 0 00:39:43.015 10:00:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:43.015 10:00:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:46.317 Initializing NVMe Controllers 00:39:46.317 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:46.317 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:46.317 Initialization complete. Launching workers. 00:39:46.317 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 114628, failed: 0 00:39:46.317 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28858, failed to submit 85770 00:39:46.317 success 0, unsuccessful 28858, failed 0 00:39:46.317 10:00:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:46.317 10:00:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:49.617 Initializing NVMe Controllers 00:39:49.617 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:49.617 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:49.617 Initialization complete. Launching workers. 
00:39:49.617 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 147081, failed: 0 00:39:49.617 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36822, failed to submit 110259 00:39:49.618 success 0, unsuccessful 36822, failed 0 00:39:49.618 10:00:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:49.618 10:00:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:49.618 10:00:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:39:49.618 10:00:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:49.618 10:00:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:49.618 10:00:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:49.618 10:00:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:49.618 10:00:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:39:49.618 10:00:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:39:49.618 10:00:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:52.919 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:52.919 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:52.919 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:52.919 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:52.919 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:52.919 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:52.919 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:52.919 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:52.919 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:52.919 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:52.919 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:52.919 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:52.919 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:52.919 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:52.919 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:52.919 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:54.831 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:54.831 00:39:54.831 real 0m20.603s 00:39:54.831 user 0m9.994s 00:39:54.831 sys 0m6.225s 00:39:54.831 10:00:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # xtrace_disable 00:39:54.831 10:00:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:54.831 ************************************ 00:39:54.831 END TEST kernel_target_abort 00:39:54.831 ************************************ 00:39:54.831 10:00:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:54.831 10:00:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:54.831 10:00:54 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:54.831 10:00:54 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:39:54.831 10:00:54 nvmf_abort_qd_sizes -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:54.831 10:00:54 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:39:54.831 10:00:54 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:54.831 10:00:54 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:54.831 rmmod nvme_tcp 00:39:54.831 rmmod nvme_fabrics 00:39:54.831 rmmod nvme_keyring 00:39:54.831 10:00:54 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:54.831 10:00:54 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:39:54.831 10:00:54 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:39:54.831 10:00:54 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 3687617 ']' 00:39:54.831 10:00:54 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 3687617 00:39:54.831 10:00:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # '[' -z 3687617 ']' 00:39:54.831 10:00:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@957 -- # kill -0 3687617 00:39:54.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 957: kill: (3687617) - No such process 00:39:54.831 10:00:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@980 -- # echo 'Process with pid 3687617 is not found' 00:39:54.831 Process with pid 3687617 is not found 00:39:54.831 10:00:54 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:39:54.831 10:00:54 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:59.038 Waiting for block devices as requested 00:39:59.038 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:59.038 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:59.038 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:59.038 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:59.038 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:59.038 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:59.038 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:59.038 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:59.038 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:59.298 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:59.298 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:59.298 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:59.557 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:59.557 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:59.557 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:59.819 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:59.819 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:00.082 10:00:59 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:00.082 10:00:59 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:00.082 10:00:59 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:40:00.082 10:00:59 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:40:00.082 10:00:59 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:00.082 10:00:59 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:40:00.082 10:00:59 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:00.082 10:00:59 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:00.082 10:00:59 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:00.082 10:00:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 
00:40:00.082 10:00:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:02.627 10:01:01 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:02.627 00:40:02.627 real 0m53.371s 00:40:02.627 user 1m5.247s 00:40:02.627 sys 0m19.794s 00:40:02.627 10:01:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # xtrace_disable 00:40:02.627 10:01:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:02.627 ************************************ 00:40:02.627 END TEST nvmf_abort_qd_sizes 00:40:02.627 ************************************ 00:40:02.627 10:01:01 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:02.627 10:01:01 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:40:02.627 10:01:01 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:40:02.627 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:40:02.627 ************************************ 00:40:02.627 START TEST keyring_file 00:40:02.627 ************************************ 00:40:02.627 10:01:01 keyring_file -- common/autotest_common.sh@1128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:02.627 * Looking for test storage... 00:40:02.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:02.627 10:01:01 keyring_file -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:40:02.627 10:01:01 keyring_file -- common/autotest_common.sh@1626 -- # lcov --version 00:40:02.627 10:01:01 keyring_file -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:40:02.627 10:01:01 keyring_file -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:40:02.627 10:01:01 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:02.627 10:01:01 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:02.627 10:01:01 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:02.627 10:01:01 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:40:02.627 10:01:01 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:40:02.627 10:01:01 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:40:02.627 10:01:01 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:40:02.627 10:01:01 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:40:02.627 10:01:01 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:40:02.627 10:01:01 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:40:02.627 10:01:01 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:02.627 10:01:01 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:40:02.627 10:01:01 keyring_file -- scripts/common.sh@345 -- # : 1 00:40:02.627 10:01:01 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:02.627 10:01:01 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:02.627 10:01:01 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:40:02.627 10:01:01 keyring_file -- scripts/common.sh@353 -- # local d=1 00:40:02.628 10:01:01 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:02.628 10:01:01 keyring_file -- scripts/common.sh@355 -- # echo 1 00:40:02.628 10:01:01 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:40:02.628 10:01:01 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:40:02.628 10:01:01 keyring_file -- scripts/common.sh@353 -- # local d=2 00:40:02.628 10:01:01 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:02.628 10:01:01 keyring_file -- scripts/common.sh@355 -- # echo 2 00:40:02.628 10:01:02 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:40:02.628 10:01:02 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:02.628 10:01:02 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:02.628 10:01:02 keyring_file -- scripts/common.sh@368 -- # return 0 00:40:02.628 10:01:02 keyring_file -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:02.628 10:01:02 keyring_file -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:40:02.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.628 --rc genhtml_branch_coverage=1 00:40:02.628 --rc genhtml_function_coverage=1 00:40:02.628 --rc genhtml_legend=1 00:40:02.628 --rc geninfo_all_blocks=1 00:40:02.628 --rc geninfo_unexecuted_blocks=1 00:40:02.628 00:40:02.628 ' 00:40:02.628 10:01:02 keyring_file -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:40:02.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.628 --rc genhtml_branch_coverage=1 00:40:02.628 --rc genhtml_function_coverage=1 00:40:02.628 --rc genhtml_legend=1 00:40:02.628 --rc geninfo_all_blocks=1 00:40:02.628 --rc geninfo_unexecuted_blocks=1 00:40:02.628 00:40:02.628 ' 00:40:02.628 10:01:02 keyring_file -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:40:02.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.628 --rc genhtml_branch_coverage=1 00:40:02.628 --rc genhtml_function_coverage=1 00:40:02.628 --rc genhtml_legend=1 00:40:02.628 --rc geninfo_all_blocks=1 00:40:02.628 --rc geninfo_unexecuted_blocks=1 00:40:02.628 00:40:02.628 ' 00:40:02.628 10:01:02 keyring_file -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:40:02.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.628 --rc genhtml_branch_coverage=1 00:40:02.628 --rc genhtml_function_coverage=1 00:40:02.628 --rc genhtml_legend=1 00:40:02.628 --rc geninfo_all_blocks=1 00:40:02.628 --rc geninfo_unexecuted_blocks=1 00:40:02.628 00:40:02.628 ' 00:40:02.628 10:01:02 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:02.628 
10:01:02 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:02.628 10:01:02 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:40:02.628 10:01:02 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:02.628 10:01:02 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:02.628 10:01:02 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:02.628 10:01:02 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.628 10:01:02 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.628 10:01:02 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.628 10:01:02 keyring_file -- paths/export.sh@5 -- # export PATH 00:40:02.628 10:01:02 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@51 -- # : 0 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:02.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:02.628 10:01:02 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:02.628 10:01:02 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:02.628 10:01:02 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:40:02.628 10:01:02 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:40:02.628 10:01:02 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:40:02.628 10:01:02 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AgB0l3Wqtk 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@731 -- # python - 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AgB0l3Wqtk 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AgB0l3Wqtk 00:40:02.628 10:01:02 keyring_file -- keyring/file.sh@26 -- # 
key0path=/tmp/tmp.AgB0l3Wqtk 00:40:02.628 10:01:02 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@17 -- # name=key1 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HFxUh9Waze 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:40:02.628 10:01:02 keyring_file -- nvmf/common.sh@731 -- # python - 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HFxUh9Waze 00:40:02.628 10:01:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HFxUh9Waze 00:40:02.628 10:01:02 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.HFxUh9Waze 00:40:02.628 10:01:02 keyring_file -- keyring/file.sh@30 -- # tgtpid=3698228 00:40:02.628 10:01:02 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3698228 00:40:02.629 10:01:02 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:02.629 10:01:02 keyring_file -- common/autotest_common.sh@834 -- # '[' -z 3698228 ']' 00:40:02.629 10:01:02 keyring_file -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:02.629 10:01:02 keyring_file -- common/autotest_common.sh@839 -- # local max_retries=100 00:40:02.629 10:01:02 keyring_file -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:02.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:02.629 10:01:02 keyring_file -- common/autotest_common.sh@843 -- # xtrace_disable 00:40:02.629 10:01:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:02.629 [2024-10-07 10:01:02.208447] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:40:02.629 [2024-10-07 10:01:02.208518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3698228 ] 00:40:02.889 [2024-10-07 10:01:02.290625] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:02.889 [2024-10-07 10:01:02.364937] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@867 -- # return 0 00:40:03.460 10:01:03 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@564 -- # xtrace_disable 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:03.460 [2024-10-07 10:01:03.015065] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:03.460 null0 00:40:03.460 [2024-10-07 10:01:03.047109] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:03.460 [2024-10-07 10:01:03.047549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:40:03.460 10:01:03 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@653 -- # local es=0 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@656 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@564 -- # xtrace_disable 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:03.460 [2024-10-07 10:01:03.079177] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:40:03.460 request: 00:40:03.460 { 00:40:03.460 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:40:03.460 "secure_channel": false, 00:40:03.460 "listen_address": { 00:40:03.460 "trtype": "tcp", 00:40:03.460 "traddr": "127.0.0.1", 00:40:03.460 "trsvcid": "4420" 00:40:03.460 }, 00:40:03.460 "method": "nvmf_subsystem_add_listener", 00:40:03.460 "req_id": 1 00:40:03.460 } 00:40:03.460 Got JSON-RPC error response 00:40:03.460 response: 00:40:03.460 { 00:40:03.460 "code": -32602, 00:40:03.460 "message": "Invalid parameters" 00:40:03.460 } 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@656 -- # es=1 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:40:03.460 10:01:03 keyring_file -- 
common/autotest_common.sh@680 -- # (( !es == 0 )) 00:40:03.460 10:01:03 keyring_file -- keyring/file.sh@47 -- # bperfpid=3698242 00:40:03.460 10:01:03 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3698242 /var/tmp/bperf.sock 00:40:03.460 10:01:03 keyring_file -- common/autotest_common.sh@834 -- # '[' -z 3698242 ']' 00:40:03.461 10:01:03 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:40:03.461 10:01:03 keyring_file -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:03.461 10:01:03 keyring_file -- common/autotest_common.sh@839 -- # local max_retries=100 00:40:03.461 10:01:03 keyring_file -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:03.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:03.461 10:01:03 keyring_file -- common/autotest_common.sh@843 -- # xtrace_disable 00:40:03.461 10:01:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:03.721 [2024-10-07 10:01:03.140170] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:40:03.721 [2024-10-07 10:01:03.140236] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3698242 ] 00:40:03.721 [2024-10-07 10:01:03.223138] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:03.721 [2024-10-07 10:01:03.304010] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:04.292 10:01:03 keyring_file -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:40:04.292 10:01:03 keyring_file -- common/autotest_common.sh@867 -- # return 0 00:40:04.292 10:01:03 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AgB0l3Wqtk 00:40:04.292 10:01:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AgB0l3Wqtk 00:40:04.552 10:01:04 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.HFxUh9Waze 00:40:04.552 10:01:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.HFxUh9Waze 00:40:04.813 10:01:04 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:40:04.813 10:01:04 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:40:04.813 10:01:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:04.813 10:01:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:04.813 10:01:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:04.813 10:01:04 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.AgB0l3Wqtk == \/\t\m\p\/\t\m\p\.\A\g\B\0\l\3\W\q\t\k ]] 00:40:04.813 10:01:04 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:40:04.813 10:01:04 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:40:04.813 10:01:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:04.813 10:01:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name 
== "key1")' 00:40:04.813 10:01:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:05.073 10:01:04 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.HFxUh9Waze == \/\t\m\p\/\t\m\p\.\H\F\x\U\h\9\W\a\z\e ]] 00:40:05.073 10:01:04 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:40:05.073 10:01:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:05.073 10:01:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:05.073 10:01:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:05.073 10:01:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:05.073 10:01:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:05.334 10:01:04 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:40:05.334 10:01:04 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:40:05.334 10:01:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:05.334 10:01:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:05.334 10:01:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:05.334 10:01:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:05.334 10:01:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:05.595 10:01:05 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:40:05.595 10:01:05 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:05.595 10:01:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:05.595 [2024-10-07 10:01:05.187628] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:05.595 nvme0n1 00:40:05.857 10:01:05 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:40:05.857 10:01:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:05.857 10:01:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:05.857 10:01:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:05.857 10:01:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:05.857 10:01:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:05.857 10:01:05 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:40:05.857 10:01:05 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:40:05.857 10:01:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:05.857 10:01:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:05.857 10:01:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:05.857 10:01:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:05.857 10:01:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:40:06.116 10:01:05 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:40:06.117 10:01:05 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:06.117 Running I/O for 1 seconds... 00:40:07.500 19678.00 IOPS, 76.87 MiB/s 00:40:07.500 Latency(us) 00:40:07.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:07.500 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:40:07.500 nvme0n1 : 1.00 19733.29 77.08 0.00 0.00 6474.71 2252.80 18131.63 00:40:07.500 =================================================================================================================== 00:40:07.500 Total : 19733.29 77.08 0.00 0.00 6474.71 2252.80 18131.63 00:40:07.500 { 00:40:07.500 "results": [ 00:40:07.500 { 00:40:07.500 "job": "nvme0n1", 00:40:07.500 "core_mask": "0x2", 00:40:07.500 "workload": "randrw", 00:40:07.500 "percentage": 50, 00:40:07.500 "status": "finished", 00:40:07.500 "queue_depth": 128, 00:40:07.500 "io_size": 4096, 00:40:07.500 "runtime": 1.003786, 00:40:07.500 "iops": 19733.2897649499, 00:40:07.500 "mibps": 77.08316314433554, 00:40:07.500 "io_failed": 0, 00:40:07.500 "io_timeout": 0, 00:40:07.500 "avg_latency_us": 6474.711470113085, 00:40:07.500 "min_latency_us": 2252.8, 00:40:07.500 "max_latency_us": 18131.626666666667 00:40:07.500 } 00:40:07.500 ], 00:40:07.500 "core_count": 1 00:40:07.500 } 00:40:07.500 10:01:06 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:07.500 10:01:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:07.500 10:01:06 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:40:07.500 10:01:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:07.500 10:01:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:07.500 10:01:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:07.500 10:01:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:07.500 10:01:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:07.500 10:01:07 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:40:07.500 10:01:07 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:40:07.500 10:01:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:07.500 10:01:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:07.500 10:01:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:07.500 10:01:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:07.500 10:01:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:07.760 10:01:07 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:40:07.760 10:01:07 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:07.760 10:01:07 keyring_file -- common/autotest_common.sh@653 -- # local es=0 00:40:07.761 10:01:07 keyring_file -- common/autotest_common.sh@655 -- # valid_exec_arg 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:07.761 10:01:07 keyring_file -- common/autotest_common.sh@641 -- # local arg=bperf_cmd 00:40:07.761 10:01:07 keyring_file -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:40:07.761 10:01:07 keyring_file -- common/autotest_common.sh@645 -- # type -t bperf_cmd 00:40:07.761 10:01:07 keyring_file -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:40:07.761 10:01:07 keyring_file -- common/autotest_common.sh@656 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:07.761 10:01:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:08.021 [2024-10-07 10:01:07.469969] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:08.021 [2024-10-07 10:01:07.470690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1bc20 (107): Transport endpoint is not connected 00:40:08.021 [2024-10-07 10:01:07.471686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1bc20 (9): Bad file descriptor 00:40:08.021 [2024-10-07 10:01:07.472688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:08.021 [2024-10-07 10:01:07.472695] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:08.021 [2024-10-07 10:01:07.472701] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:08.021 [2024-10-07 10:01:07.472708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
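What is being provoked here: the earlier attach over this listener succeeded with key0, while presenting key1 instead fails the TLS handshake (the "Transport endpoint is not connected" errors above), and the NOT wrapper requires the RPC to report that failure. A condensed sketch of the same negative check, using only flags that appear verbatim in the trace (the rpc.py path is this workspace's):

# keyring/file.sh@70 pattern: attach with the wrong PSK and treat success
# as the failure case.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
if "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk key1; then
    echo "FAIL: attach with mismatched PSK unexpectedly succeeded" >&2
    exit 1
fi
# Expected result, matching the dump that follows: code -5, Input/output error.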
00:40:08.021 request: 00:40:08.021 { 00:40:08.021 "name": "nvme0", 00:40:08.021 "trtype": "tcp", 00:40:08.021 "traddr": "127.0.0.1", 00:40:08.021 "adrfam": "ipv4", 00:40:08.021 "trsvcid": "4420", 00:40:08.021 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:08.021 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:08.021 "prchk_reftag": false, 00:40:08.021 "prchk_guard": false, 00:40:08.021 "hdgst": false, 00:40:08.021 "ddgst": false, 00:40:08.021 "psk": "key1", 00:40:08.021 "allow_unrecognized_csi": false, 00:40:08.021 "method": "bdev_nvme_attach_controller", 00:40:08.021 "req_id": 1 00:40:08.021 } 00:40:08.021 Got JSON-RPC error response 00:40:08.021 response: 00:40:08.021 { 00:40:08.021 "code": -5, 00:40:08.021 "message": "Input/output error" 00:40:08.021 } 00:40:08.021 10:01:07 keyring_file -- common/autotest_common.sh@656 -- # es=1 00:40:08.021 10:01:07 keyring_file -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:40:08.021 10:01:07 keyring_file -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:40:08.021 10:01:07 keyring_file -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:40:08.021 10:01:07 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:40:08.022 10:01:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:08.022 10:01:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:08.022 10:01:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:08.022 10:01:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:08.022 10:01:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:08.282 10:01:07 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:40:08.282 10:01:07 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:40:08.282 10:01:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:08.282 10:01:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:08.282 10:01:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:08.282 10:01:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:08.282 10:01:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:08.282 10:01:07 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:40:08.282 10:01:07 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:40:08.282 10:01:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:08.543 10:01:08 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:40:08.543 10:01:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:40:08.804 10:01:08 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:40:08.804 10:01:08 keyring_file -- keyring/file.sh@78 -- # jq length 00:40:08.804 10:01:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:08.804 10:01:08 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:40:08.804 10:01:08 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.AgB0l3Wqtk 00:40:08.804 10:01:08 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.AgB0l3Wqtk 00:40:08.804 10:01:08 keyring_file -- common/autotest_common.sh@653 -- # local es=0 00:40:08.804 10:01:08 keyring_file -- common/autotest_common.sh@655 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.AgB0l3Wqtk 00:40:08.804 10:01:08 keyring_file -- common/autotest_common.sh@641 -- # local arg=bperf_cmd 00:40:08.804 10:01:08 keyring_file -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:40:08.804 10:01:08 keyring_file -- common/autotest_common.sh@645 -- # type -t bperf_cmd 00:40:08.804 10:01:08 keyring_file -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:40:08.804 10:01:08 keyring_file -- common/autotest_common.sh@656 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AgB0l3Wqtk 00:40:08.804 10:01:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AgB0l3Wqtk 00:40:09.063 [2024-10-07 10:01:08.565520] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.AgB0l3Wqtk': 0100660 00:40:09.063 [2024-10-07 10:01:08.565540] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:40:09.063 request: 00:40:09.063 { 00:40:09.063 "name": "key0", 00:40:09.063 "path": "/tmp/tmp.AgB0l3Wqtk", 00:40:09.063 "method": "keyring_file_add_key", 00:40:09.063 "req_id": 1 00:40:09.063 } 00:40:09.063 Got JSON-RPC error response 00:40:09.063 response: 00:40:09.063 { 00:40:09.063 "code": -1, 00:40:09.063 "message": "Operation not permitted" 00:40:09.063 } 00:40:09.063 10:01:08 keyring_file -- common/autotest_common.sh@656 -- # es=1 00:40:09.063 10:01:08 keyring_file -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:40:09.063 10:01:08 keyring_file -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:40:09.063 10:01:08 keyring_file -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:40:09.063 10:01:08 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.AgB0l3Wqtk 00:40:09.063 10:01:08 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AgB0l3Wqtk 00:40:09.063 10:01:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AgB0l3Wqtk 00:40:09.323 10:01:08 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.AgB0l3Wqtk 00:40:09.323 10:01:08 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:40:09.323 10:01:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:09.323 10:01:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:09.323 10:01:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:09.323 10:01:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:09.323 10:01:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:09.323 10:01:08 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:40:09.323 10:01:08 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:09.323 10:01:08 keyring_file -- common/autotest_common.sh@653 -- # local es=0 00:40:09.323 10:01:08 keyring_file -- common/autotest_common.sh@655 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:09.323 10:01:08 keyring_file -- common/autotest_common.sh@641 -- # local arg=bperf_cmd 00:40:09.323 10:01:08 keyring_file -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:40:09.323 10:01:08 keyring_file -- common/autotest_common.sh@645 -- # type -t bperf_cmd 00:40:09.323 10:01:08 keyring_file -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:40:09.323 10:01:08 keyring_file -- common/autotest_common.sh@656 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:09.323 10:01:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:09.583 [2024-10-07 10:01:09.090857] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.AgB0l3Wqtk': No such file or directory 00:40:09.583 [2024-10-07 10:01:09.090871] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:40:09.583 [2024-10-07 10:01:09.090884] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:40:09.583 [2024-10-07 10:01:09.090889] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:40:09.583 [2024-10-07 10:01:09.090895] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:09.583 [2024-10-07 10:01:09.090900] bdev_nvme.c:6449:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:40:09.583 request: 00:40:09.583 { 00:40:09.583 "name": "nvme0", 00:40:09.583 "trtype": "tcp", 00:40:09.583 "traddr": "127.0.0.1", 00:40:09.583 "adrfam": "ipv4", 00:40:09.583 "trsvcid": "4420", 00:40:09.583 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:09.583 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:09.583 "prchk_reftag": false, 00:40:09.583 "prchk_guard": false, 00:40:09.583 "hdgst": false, 00:40:09.583 "ddgst": false, 00:40:09.583 "psk": "key0", 00:40:09.583 "allow_unrecognized_csi": false, 00:40:09.583 "method": "bdev_nvme_attach_controller", 00:40:09.583 "req_id": 1 00:40:09.583 } 00:40:09.583 Got JSON-RPC error response 00:40:09.583 response: 00:40:09.583 { 00:40:09.583 "code": -19, 00:40:09.583 "message": "No such device" 00:40:09.583 } 00:40:09.583 10:01:09 keyring_file -- common/autotest_common.sh@656 -- # es=1 00:40:09.583 10:01:09 keyring_file -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:40:09.583 10:01:09 keyring_file -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:40:09.583 10:01:09 keyring_file -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:40:09.583 10:01:09 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:40:09.583 10:01:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:09.848 10:01:09 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:09.848 10:01:09 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:40:09.848 10:01:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:09.848 10:01:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:09.848 10:01:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:09.848 10:01:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:09.848 10:01:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.U3bJ6FWjsL 00:40:09.848 10:01:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:09.848 10:01:09 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:09.849 10:01:09 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:40:09.849 10:01:09 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:40:09.849 10:01:09 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:40:09.849 10:01:09 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:40:09.849 10:01:09 keyring_file -- nvmf/common.sh@731 -- # python - 00:40:09.849 10:01:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.U3bJ6FWjsL 00:40:09.849 10:01:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.U3bJ6FWjsL 00:40:09.849 10:01:09 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.U3bJ6FWjsL 00:40:09.849 10:01:09 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.U3bJ6FWjsL 00:40:09.849 10:01:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.U3bJ6FWjsL 00:40:09.849 10:01:09 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:09.849 10:01:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:10.219 nvme0n1 00:40:10.219 10:01:09 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:40:10.219 10:01:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:10.219 10:01:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:10.219 10:01:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:10.219 10:01:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:10.219 10:01:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:10.528 10:01:09 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:40:10.528 10:01:09 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:40:10.528 10:01:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:10.528 10:01:10 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:40:10.528 10:01:10 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:40:10.528 10:01:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:10.528 10:01:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:10.528 10:01:10 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:10.789 10:01:10 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:40:10.789 10:01:10 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:40:10.789 10:01:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:10.789 10:01:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:10.789 10:01:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:10.789 10:01:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:10.789 10:01:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:11.052 10:01:10 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:40:11.052 10:01:10 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:11.052 10:01:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:11.052 10:01:10 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:40:11.052 10:01:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:11.052 10:01:10 keyring_file -- keyring/file.sh@105 -- # jq length 00:40:11.313 10:01:10 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:40:11.313 10:01:10 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.U3bJ6FWjsL 00:40:11.313 10:01:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.U3bJ6FWjsL 00:40:11.575 10:01:10 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.HFxUh9Waze 00:40:11.575 10:01:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.HFxUh9Waze 00:40:11.575 10:01:11 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:11.575 10:01:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:11.836 nvme0n1 00:40:11.836 10:01:11 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:40:11.836 10:01:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:40:12.098 10:01:11 keyring_file -- keyring/file.sh@113 -- # config='{ 00:40:12.098 "subsystems": [ 00:40:12.098 { 00:40:12.098 "subsystem": "keyring", 00:40:12.098 "config": [ 00:40:12.098 { 00:40:12.098 "method": "keyring_file_add_key", 00:40:12.098 "params": { 00:40:12.098 "name": "key0", 00:40:12.098 "path": "/tmp/tmp.U3bJ6FWjsL" 00:40:12.098 } 00:40:12.098 }, 00:40:12.098 { 00:40:12.098 "method": "keyring_file_add_key", 00:40:12.098 "params": { 00:40:12.098 "name": "key1", 00:40:12.098 "path": "/tmp/tmp.HFxUh9Waze" 00:40:12.098 } 00:40:12.098 } 00:40:12.098 ] 00:40:12.098 
}, 00:40:12.098 { 00:40:12.098 "subsystem": "iobuf", 00:40:12.098 "config": [ 00:40:12.098 { 00:40:12.098 "method": "iobuf_set_options", 00:40:12.098 "params": { 00:40:12.098 "small_pool_count": 8192, 00:40:12.098 "large_pool_count": 1024, 00:40:12.098 "small_bufsize": 8192, 00:40:12.098 "large_bufsize": 135168 00:40:12.098 } 00:40:12.098 } 00:40:12.098 ] 00:40:12.098 }, 00:40:12.098 { 00:40:12.098 "subsystem": "sock", 00:40:12.098 "config": [ 00:40:12.098 { 00:40:12.098 "method": "sock_set_default_impl", 00:40:12.098 "params": { 00:40:12.098 "impl_name": "posix" 00:40:12.098 } 00:40:12.098 }, 00:40:12.098 { 00:40:12.098 "method": "sock_impl_set_options", 00:40:12.098 "params": { 00:40:12.098 "impl_name": "ssl", 00:40:12.098 "recv_buf_size": 4096, 00:40:12.098 "send_buf_size": 4096, 00:40:12.098 "enable_recv_pipe": true, 00:40:12.098 "enable_quickack": false, 00:40:12.098 "enable_placement_id": 0, 00:40:12.098 "enable_zerocopy_send_server": true, 00:40:12.098 "enable_zerocopy_send_client": false, 00:40:12.098 "zerocopy_threshold": 0, 00:40:12.098 "tls_version": 0, 00:40:12.098 "enable_ktls": false 00:40:12.098 } 00:40:12.098 }, 00:40:12.098 { 00:40:12.098 "method": "sock_impl_set_options", 00:40:12.098 "params": { 00:40:12.098 "impl_name": "posix", 00:40:12.098 "recv_buf_size": 2097152, 00:40:12.098 "send_buf_size": 2097152, 00:40:12.098 "enable_recv_pipe": true, 00:40:12.098 "enable_quickack": false, 00:40:12.098 "enable_placement_id": 0, 00:40:12.098 "enable_zerocopy_send_server": true, 00:40:12.098 "enable_zerocopy_send_client": false, 00:40:12.098 "zerocopy_threshold": 0, 00:40:12.098 "tls_version": 0, 00:40:12.098 "enable_ktls": false 00:40:12.098 } 00:40:12.098 } 00:40:12.098 ] 00:40:12.098 }, 00:40:12.098 { 00:40:12.098 "subsystem": "vmd", 00:40:12.098 "config": [] 00:40:12.098 }, 00:40:12.098 { 00:40:12.098 "subsystem": "accel", 00:40:12.098 "config": [ 00:40:12.098 { 00:40:12.098 "method": "accel_set_options", 00:40:12.098 "params": { 00:40:12.098 "small_cache_size": 128, 00:40:12.098 "large_cache_size": 16, 00:40:12.098 "task_count": 2048, 00:40:12.098 "sequence_count": 2048, 00:40:12.098 "buf_count": 2048 00:40:12.098 } 00:40:12.098 } 00:40:12.098 ] 00:40:12.098 }, 00:40:12.098 { 00:40:12.098 "subsystem": "bdev", 00:40:12.098 "config": [ 00:40:12.098 { 00:40:12.098 "method": "bdev_set_options", 00:40:12.098 "params": { 00:40:12.098 "bdev_io_pool_size": 65535, 00:40:12.098 "bdev_io_cache_size": 256, 00:40:12.098 "bdev_auto_examine": true, 00:40:12.098 "iobuf_small_cache_size": 128, 00:40:12.098 "iobuf_large_cache_size": 16 00:40:12.098 } 00:40:12.098 }, 00:40:12.098 { 00:40:12.098 "method": "bdev_raid_set_options", 00:40:12.098 "params": { 00:40:12.098 "process_window_size_kb": 1024, 00:40:12.098 "process_max_bandwidth_mb_sec": 0 00:40:12.098 } 00:40:12.098 }, 00:40:12.098 { 00:40:12.098 "method": "bdev_iscsi_set_options", 00:40:12.098 "params": { 00:40:12.098 "timeout_sec": 30 00:40:12.098 } 00:40:12.098 }, 00:40:12.098 { 00:40:12.098 "method": "bdev_nvme_set_options", 00:40:12.098 "params": { 00:40:12.098 "action_on_timeout": "none", 00:40:12.098 "timeout_us": 0, 00:40:12.098 "timeout_admin_us": 0, 00:40:12.098 "keep_alive_timeout_ms": 10000, 00:40:12.098 "arbitration_burst": 0, 00:40:12.098 "low_priority_weight": 0, 00:40:12.098 "medium_priority_weight": 0, 00:40:12.098 "high_priority_weight": 0, 00:40:12.098 "nvme_adminq_poll_period_us": 10000, 00:40:12.098 "nvme_ioq_poll_period_us": 0, 00:40:12.098 "io_queue_requests": 512, 00:40:12.098 "delay_cmd_submit": true, 00:40:12.098 
"transport_retry_count": 4, 00:40:12.098 "bdev_retry_count": 3, 00:40:12.098 "transport_ack_timeout": 0, 00:40:12.098 "ctrlr_loss_timeout_sec": 0, 00:40:12.098 "reconnect_delay_sec": 0, 00:40:12.098 "fast_io_fail_timeout_sec": 0, 00:40:12.098 "disable_auto_failback": false, 00:40:12.098 "generate_uuids": false, 00:40:12.098 "transport_tos": 0, 00:40:12.098 "nvme_error_stat": false, 00:40:12.098 "rdma_srq_size": 0, 00:40:12.098 "io_path_stat": false, 00:40:12.098 "allow_accel_sequence": false, 00:40:12.098 "rdma_max_cq_size": 0, 00:40:12.098 "rdma_cm_event_timeout_ms": 0, 00:40:12.098 "dhchap_digests": [ 00:40:12.098 "sha256", 00:40:12.098 "sha384", 00:40:12.098 "sha512" 00:40:12.098 ], 00:40:12.098 "dhchap_dhgroups": [ 00:40:12.098 "null", 00:40:12.098 "ffdhe2048", 00:40:12.098 "ffdhe3072", 00:40:12.098 "ffdhe4096", 00:40:12.098 "ffdhe6144", 00:40:12.098 "ffdhe8192" 00:40:12.098 ] 00:40:12.098 } 00:40:12.098 }, 00:40:12.098 { 00:40:12.098 "method": "bdev_nvme_attach_controller", 00:40:12.098 "params": { 00:40:12.098 "name": "nvme0", 00:40:12.098 "trtype": "TCP", 00:40:12.098 "adrfam": "IPv4", 00:40:12.098 "traddr": "127.0.0.1", 00:40:12.099 "trsvcid": "4420", 00:40:12.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:12.099 "prchk_reftag": false, 00:40:12.099 "prchk_guard": false, 00:40:12.099 "ctrlr_loss_timeout_sec": 0, 00:40:12.099 "reconnect_delay_sec": 0, 00:40:12.099 "fast_io_fail_timeout_sec": 0, 00:40:12.099 "psk": "key0", 00:40:12.099 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:12.099 "hdgst": false, 00:40:12.099 "ddgst": false 00:40:12.099 } 00:40:12.099 }, 00:40:12.099 { 00:40:12.099 "method": "bdev_nvme_set_hotplug", 00:40:12.099 "params": { 00:40:12.099 "period_us": 100000, 00:40:12.099 "enable": false 00:40:12.099 } 00:40:12.099 }, 00:40:12.099 { 00:40:12.099 "method": "bdev_wait_for_examine" 00:40:12.099 } 00:40:12.099 ] 00:40:12.099 }, 00:40:12.099 { 00:40:12.099 "subsystem": "nbd", 00:40:12.099 "config": [] 00:40:12.099 } 00:40:12.099 ] 00:40:12.099 }' 00:40:12.099 10:01:11 keyring_file -- keyring/file.sh@115 -- # killprocess 3698242 00:40:12.099 10:01:11 keyring_file -- common/autotest_common.sh@953 -- # '[' -z 3698242 ']' 00:40:12.099 10:01:11 keyring_file -- common/autotest_common.sh@957 -- # kill -0 3698242 00:40:12.099 10:01:11 keyring_file -- common/autotest_common.sh@958 -- # uname 00:40:12.099 10:01:11 keyring_file -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:40:12.099 10:01:11 keyring_file -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3698242 00:40:12.099 10:01:11 keyring_file -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:40:12.099 10:01:11 keyring_file -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:40:12.099 10:01:11 keyring_file -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3698242' 00:40:12.099 killing process with pid 3698242 00:40:12.099 10:01:11 keyring_file -- common/autotest_common.sh@972 -- # kill 3698242 00:40:12.099 Received shutdown signal, test time was about 1.000000 seconds 00:40:12.099 00:40:12.099 Latency(us) 00:40:12.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:12.099 =================================================================================================================== 00:40:12.099 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:12.099 10:01:11 keyring_file -- common/autotest_common.sh@977 -- # wait 3698242 00:40:12.361 10:01:11 keyring_file -- keyring/file.sh@118 -- # bperfpid=3700057 00:40:12.361 
10:01:11 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3700057 /var/tmp/bperf.sock 00:40:12.361 10:01:11 keyring_file -- common/autotest_common.sh@834 -- # '[' -z 3700057 ']' 00:40:12.361 10:01:11 keyring_file -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:12.361 10:01:11 keyring_file -- common/autotest_common.sh@839 -- # local max_retries=100 00:40:12.361 10:01:11 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:40:12.361 10:01:11 keyring_file -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:12.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:12.361 10:01:11 keyring_file -- common/autotest_common.sh@843 -- # xtrace_disable 00:40:12.361 10:01:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:12.361 10:01:11 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:40:12.361 "subsystems": [ 00:40:12.361 { 00:40:12.361 "subsystem": "keyring", 00:40:12.361 "config": [ 00:40:12.361 { 00:40:12.361 "method": "keyring_file_add_key", 00:40:12.361 "params": { 00:40:12.361 "name": "key0", 00:40:12.361 "path": "/tmp/tmp.U3bJ6FWjsL" 00:40:12.361 } 00:40:12.361 }, 00:40:12.361 { 00:40:12.361 "method": "keyring_file_add_key", 00:40:12.361 "params": { 00:40:12.361 "name": "key1", 00:40:12.361 "path": "/tmp/tmp.HFxUh9Waze" 00:40:12.361 } 00:40:12.361 } 00:40:12.361 ] 00:40:12.361 }, 00:40:12.361 { 00:40:12.361 "subsystem": "iobuf", 00:40:12.361 "config": [ 00:40:12.361 { 00:40:12.361 "method": "iobuf_set_options", 00:40:12.361 "params": { 00:40:12.361 "small_pool_count": 8192, 00:40:12.361 "large_pool_count": 1024, 00:40:12.361 "small_bufsize": 8192, 00:40:12.361 "large_bufsize": 135168 00:40:12.361 } 00:40:12.361 } 00:40:12.361 ] 00:40:12.361 }, 00:40:12.361 { 00:40:12.361 "subsystem": "sock", 00:40:12.361 "config": [ 00:40:12.361 { 00:40:12.361 "method": "sock_set_default_impl", 00:40:12.361 "params": { 00:40:12.361 "impl_name": "posix" 00:40:12.361 } 00:40:12.361 }, 00:40:12.361 { 00:40:12.361 "method": "sock_impl_set_options", 00:40:12.361 "params": { 00:40:12.361 "impl_name": "ssl", 00:40:12.361 "recv_buf_size": 4096, 00:40:12.361 "send_buf_size": 4096, 00:40:12.361 "enable_recv_pipe": true, 00:40:12.361 "enable_quickack": false, 00:40:12.361 "enable_placement_id": 0, 00:40:12.361 "enable_zerocopy_send_server": true, 00:40:12.361 "enable_zerocopy_send_client": false, 00:40:12.361 "zerocopy_threshold": 0, 00:40:12.361 "tls_version": 0, 00:40:12.361 "enable_ktls": false 00:40:12.361 } 00:40:12.361 }, 00:40:12.361 { 00:40:12.361 "method": "sock_impl_set_options", 00:40:12.361 "params": { 00:40:12.361 "impl_name": "posix", 00:40:12.361 "recv_buf_size": 2097152, 00:40:12.361 "send_buf_size": 2097152, 00:40:12.361 "enable_recv_pipe": true, 00:40:12.361 "enable_quickack": false, 00:40:12.361 "enable_placement_id": 0, 00:40:12.361 "enable_zerocopy_send_server": true, 00:40:12.361 "enable_zerocopy_send_client": false, 00:40:12.361 "zerocopy_threshold": 0, 00:40:12.361 "tls_version": 0, 00:40:12.361 "enable_ktls": false 00:40:12.361 } 00:40:12.361 } 00:40:12.361 ] 00:40:12.361 }, 00:40:12.361 { 00:40:12.361 "subsystem": "vmd", 00:40:12.361 "config": [] 00:40:12.361 }, 00:40:12.361 { 00:40:12.361 "subsystem": "accel", 00:40:12.361 "config": [ 00:40:12.361 { 00:40:12.361 "method": 
"accel_set_options", 00:40:12.361 "params": { 00:40:12.361 "small_cache_size": 128, 00:40:12.361 "large_cache_size": 16, 00:40:12.361 "task_count": 2048, 00:40:12.361 "sequence_count": 2048, 00:40:12.361 "buf_count": 2048 00:40:12.361 } 00:40:12.361 } 00:40:12.361 ] 00:40:12.361 }, 00:40:12.361 { 00:40:12.361 "subsystem": "bdev", 00:40:12.361 "config": [ 00:40:12.361 { 00:40:12.361 "method": "bdev_set_options", 00:40:12.361 "params": { 00:40:12.361 "bdev_io_pool_size": 65535, 00:40:12.361 "bdev_io_cache_size": 256, 00:40:12.361 "bdev_auto_examine": true, 00:40:12.361 "iobuf_small_cache_size": 128, 00:40:12.361 "iobuf_large_cache_size": 16 00:40:12.361 } 00:40:12.361 }, 00:40:12.361 { 00:40:12.361 "method": "bdev_raid_set_options", 00:40:12.361 "params": { 00:40:12.361 "process_window_size_kb": 1024, 00:40:12.362 "process_max_bandwidth_mb_sec": 0 00:40:12.362 } 00:40:12.362 }, 00:40:12.362 { 00:40:12.362 "method": "bdev_iscsi_set_options", 00:40:12.362 "params": { 00:40:12.362 "timeout_sec": 30 00:40:12.362 } 00:40:12.362 }, 00:40:12.362 { 00:40:12.362 "method": "bdev_nvme_set_options", 00:40:12.362 "params": { 00:40:12.362 "action_on_timeout": "none", 00:40:12.362 "timeout_us": 0, 00:40:12.362 "timeout_admin_us": 0, 00:40:12.362 "keep_alive_timeout_ms": 10000, 00:40:12.362 "arbitration_burst": 0, 00:40:12.362 "low_priority_weight": 0, 00:40:12.362 "medium_priority_weight": 0, 00:40:12.362 "high_priority_weight": 0, 00:40:12.362 "nvme_adminq_poll_period_us": 10000, 00:40:12.362 "nvme_ioq_poll_period_us": 0, 00:40:12.362 "io_queue_requests": 512, 00:40:12.362 "delay_cmd_submit": true, 00:40:12.362 "transport_retry_count": 4, 00:40:12.362 "bdev_retry_count": 3, 00:40:12.362 "transport_ack_timeout": 0, 00:40:12.362 "ctrlr_loss_timeout_sec": 0, 00:40:12.362 "reconnect_delay_sec": 0, 00:40:12.362 "fast_io_fail_timeout_sec": 0, 00:40:12.362 "disable_auto_failback": false, 00:40:12.362 "generate_uuids": false, 00:40:12.362 "transport_tos": 0, 00:40:12.362 "nvme_error_stat": false, 00:40:12.362 "rdma_srq_size": 0, 00:40:12.362 "io_path_stat": false, 00:40:12.362 "allow_accel_sequence": false, 00:40:12.362 "rdma_max_cq_size": 0, 00:40:12.362 "rdma_cm_event_timeout_ms": 0, 00:40:12.362 "dhchap_digests": [ 00:40:12.362 "sha256", 00:40:12.362 "sha384", 00:40:12.362 "sha512" 00:40:12.362 ], 00:40:12.362 "dhchap_dhgroups": [ 00:40:12.362 "null", 00:40:12.362 "ffdhe2048", 00:40:12.362 "ffdhe3072", 00:40:12.362 "ffdhe4096", 00:40:12.362 "ffdhe6144", 00:40:12.362 "ffdhe8192" 00:40:12.362 ] 00:40:12.362 } 00:40:12.362 }, 00:40:12.362 { 00:40:12.362 "method": "bdev_nvme_attach_controller", 00:40:12.362 "params": { 00:40:12.362 "name": "nvme0", 00:40:12.362 "trtype": "TCP", 00:40:12.362 "adrfam": "IPv4", 00:40:12.362 "traddr": "127.0.0.1", 00:40:12.362 "trsvcid": "4420", 00:40:12.362 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:12.362 "prchk_reftag": false, 00:40:12.362 "prchk_guard": false, 00:40:12.362 "ctrlr_loss_timeout_sec": 0, 00:40:12.362 "reconnect_delay_sec": 0, 00:40:12.362 "fast_io_fail_timeout_sec": 0, 00:40:12.362 "psk": "key0", 00:40:12.362 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:12.362 "hdgst": false, 00:40:12.362 "ddgst": false 00:40:12.362 } 00:40:12.362 }, 00:40:12.362 { 00:40:12.362 "method": "bdev_nvme_set_hotplug", 00:40:12.362 "params": { 00:40:12.362 "period_us": 100000, 00:40:12.362 "enable": false 00:40:12.362 } 00:40:12.362 }, 00:40:12.362 { 00:40:12.362 "method": "bdev_wait_for_examine" 00:40:12.362 } 00:40:12.362 ] 00:40:12.362 }, 00:40:12.362 { 00:40:12.362 "subsystem": 
"nbd", 00:40:12.362 "config": [] 00:40:12.362 } 00:40:12.362 ] 00:40:12.362 }' 00:40:12.362 [2024-10-07 10:01:11.880061] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:40:12.362 [2024-10-07 10:01:11.880115] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3700057 ] 00:40:12.362 [2024-10-07 10:01:11.956385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:12.362 [2024-10-07 10:01:12.009412] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:12.623 [2024-10-07 10:01:12.152411] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:13.195 10:01:12 keyring_file -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:40:13.195 10:01:12 keyring_file -- common/autotest_common.sh@867 -- # return 0 00:40:13.195 10:01:12 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:40:13.195 10:01:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:13.195 10:01:12 keyring_file -- keyring/file.sh@121 -- # jq length 00:40:13.456 10:01:12 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:40:13.456 10:01:12 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:40:13.456 10:01:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:13.456 10:01:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:13.456 10:01:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:13.456 10:01:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:13.456 10:01:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:13.456 10:01:13 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:40:13.456 10:01:13 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:40:13.456 10:01:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:13.456 10:01:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:13.456 10:01:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:13.456 10:01:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:13.456 10:01:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:13.716 10:01:13 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:40:13.716 10:01:13 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:40:13.716 10:01:13 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:40:13.716 10:01:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:40:13.976 10:01:13 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:40:13.976 10:01:13 keyring_file -- keyring/file.sh@1 -- # cleanup 00:40:13.976 10:01:13 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.U3bJ6FWjsL /tmp/tmp.HFxUh9Waze 00:40:13.976 10:01:13 keyring_file -- keyring/file.sh@20 -- # killprocess 3700057 00:40:13.976 10:01:13 keyring_file -- common/autotest_common.sh@953 -- # '[' -z 3700057 ']' 
00:40:13.976 10:01:13 keyring_file -- common/autotest_common.sh@957 -- # kill -0 3700057 00:40:13.976 10:01:13 keyring_file -- common/autotest_common.sh@958 -- # uname 00:40:13.976 10:01:13 keyring_file -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:40:13.976 10:01:13 keyring_file -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3700057 00:40:13.976 10:01:13 keyring_file -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:40:13.976 10:01:13 keyring_file -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:40:13.976 10:01:13 keyring_file -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3700057' 00:40:13.976 killing process with pid 3700057 00:40:13.976 10:01:13 keyring_file -- common/autotest_common.sh@972 -- # kill 3700057 00:40:13.976 Received shutdown signal, test time was about 1.000000 seconds 00:40:13.976 00:40:13.976 Latency(us) 00:40:13.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:13.976 =================================================================================================================== 00:40:13.976 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:13.976 10:01:13 keyring_file -- common/autotest_common.sh@977 -- # wait 3700057 00:40:13.976 10:01:13 keyring_file -- keyring/file.sh@21 -- # killprocess 3698228 00:40:13.976 10:01:13 keyring_file -- common/autotest_common.sh@953 -- # '[' -z 3698228 ']' 00:40:13.976 10:01:13 keyring_file -- common/autotest_common.sh@957 -- # kill -0 3698228 00:40:13.976 10:01:13 keyring_file -- common/autotest_common.sh@958 -- # uname 00:40:13.976 10:01:13 keyring_file -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:40:13.976 10:01:13 keyring_file -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3698228 00:40:14.238 10:01:13 keyring_file -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:40:14.238 10:01:13 keyring_file -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:40:14.238 10:01:13 keyring_file -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3698228' 00:40:14.238 killing process with pid 3698228 00:40:14.238 10:01:13 keyring_file -- common/autotest_common.sh@972 -- # kill 3698228 00:40:14.238 10:01:13 keyring_file -- common/autotest_common.sh@977 -- # wait 3698228 00:40:14.238 00:40:14.238 real 0m12.076s 00:40:14.238 user 0m29.140s 00:40:14.238 sys 0m2.683s 00:40:14.238 10:01:13 keyring_file -- common/autotest_common.sh@1129 -- # xtrace_disable 00:40:14.238 10:01:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:14.238 ************************************ 00:40:14.238 END TEST keyring_file 00:40:14.238 ************************************ 00:40:14.238 10:01:13 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:40:14.238 10:01:13 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:14.238 10:01:13 -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:40:14.238 10:01:13 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:40:14.500 10:01:13 -- common/autotest_common.sh@10 -- # set +x 00:40:14.500 ************************************ 00:40:14.500 START TEST keyring_linux 00:40:14.500 ************************************ 00:40:14.500 10:01:13 keyring_linux -- common/autotest_common.sh@1128 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:14.500 Joined session keyring: 579336605 00:40:14.500 * Looking for test storage... 00:40:14.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:14.500 10:01:14 keyring_linux -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:40:14.500 10:01:14 keyring_linux -- common/autotest_common.sh@1626 -- # lcov --version 00:40:14.500 10:01:14 keyring_linux -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:40:14.500 10:01:14 keyring_linux -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@345 -- # : 1 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:40:14.500 10:01:14 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:40:14.761 10:01:14 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:40:14.761 10:01:14 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:14.761 10:01:14 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:40:14.761 10:01:14 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:40:14.761 10:01:14 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:14.761 10:01:14 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:14.761 10:01:14 keyring_linux -- scripts/common.sh@368 -- # return 0 00:40:14.761 10:01:14 keyring_linux -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:14.761 10:01:14 keyring_linux -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:40:14.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:14.761 --rc genhtml_branch_coverage=1 00:40:14.761 --rc genhtml_function_coverage=1 00:40:14.761 --rc genhtml_legend=1 00:40:14.761 --rc geninfo_all_blocks=1 00:40:14.761 --rc geninfo_unexecuted_blocks=1 00:40:14.761 00:40:14.761 ' 00:40:14.761 10:01:14 keyring_linux -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:40:14.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:14.761 --rc genhtml_branch_coverage=1 00:40:14.761 --rc genhtml_function_coverage=1 00:40:14.761 --rc genhtml_legend=1 00:40:14.761 --rc geninfo_all_blocks=1 00:40:14.761 --rc geninfo_unexecuted_blocks=1 00:40:14.761 00:40:14.761 ' 00:40:14.761 10:01:14 keyring_linux -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:40:14.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:14.761 --rc genhtml_branch_coverage=1 00:40:14.761 --rc genhtml_function_coverage=1 00:40:14.761 --rc genhtml_legend=1 00:40:14.761 --rc geninfo_all_blocks=1 00:40:14.761 --rc geninfo_unexecuted_blocks=1 00:40:14.761 00:40:14.761 ' 00:40:14.761 10:01:14 keyring_linux -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:40:14.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:14.761 --rc genhtml_branch_coverage=1 00:40:14.761 --rc genhtml_function_coverage=1 00:40:14.761 --rc genhtml_legend=1 00:40:14.761 --rc geninfo_all_blocks=1 00:40:14.761 --rc geninfo_unexecuted_blocks=1 00:40:14.761 00:40:14.761 ' 00:40:14.761 10:01:14 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:14.761 10:01:14 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:14.761 10:01:14 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:14.761 10:01:14 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:14.761 10:01:14 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:14.761 10:01:14 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:14.761 10:01:14 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:14.761 10:01:14 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:40:14.761 10:01:14 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:14.761 10:01:14 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:14.761 10:01:14 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:14.762 10:01:14 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:40:14.762 10:01:14 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:14.762 10:01:14 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:14.762 10:01:14 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:14.762 10:01:14 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.762 10:01:14 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.762 10:01:14 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.762 10:01:14 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:14.762 10:01:14 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:14.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:14.762 10:01:14 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:14.762 10:01:14 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:14.762 10:01:14 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:14.762 10:01:14 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:14.762 10:01:14 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:14.762 10:01:14 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:14.762 10:01:14 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:14.762 10:01:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:14.762 10:01:14 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:40:14.762 10:01:14 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:14.762 10:01:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:14.762 10:01:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:14.762 10:01:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@731 -- # python - 00:40:14.762 10:01:14 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:14.762 10:01:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:14.762 /tmp/:spdk-test:key0 00:40:14.762 10:01:14 keyring_linux -- 
keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:14.762 10:01:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:14.762 10:01:14 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:14.762 10:01:14 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:14.762 10:01:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:14.762 10:01:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:14.762 10:01:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:40:14.762 10:01:14 keyring_linux -- nvmf/common.sh@731 -- # python - 00:40:14.762 10:01:14 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:14.762 10:01:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:14.762 /tmp/:spdk-test:key1 00:40:14.762 10:01:14 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3700565 00:40:14.762 10:01:14 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3700565 00:40:14.762 10:01:14 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:14.762 10:01:14 keyring_linux -- common/autotest_common.sh@834 -- # '[' -z 3700565 ']' 00:40:14.762 10:01:14 keyring_linux -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:14.762 10:01:14 keyring_linux -- common/autotest_common.sh@839 -- # local max_retries=100 00:40:14.762 10:01:14 keyring_linux -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:14.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:14.762 10:01:14 keyring_linux -- common/autotest_common.sh@843 -- # xtrace_disable 00:40:14.762 10:01:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:14.762 [2024-10-07 10:01:14.345722] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
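
The prep_key trace above shows format_interchange_psk turning a raw key into the NVMe TLS PSK interchange string (NVMeTLSkey-1:<hash indicator>:<base64 payload>:). A minimal sketch of that transformation follows, assuming the payload is the key bytes followed by their little-endian CRC-32, which is what the logged base64 output suggests; format_psk_sketch is a hypothetical name, not the helper nvmf/common.sh itself defines:

format_psk_sketch() {
  local key=$1 digest=$2
  # python3 - reads the program from stdin; "$key" and "$digest" arrive via sys.argv
  python3 - "$key" "$digest" << 'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # key bytes, exactly as passed in the trace
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte little-endian CRC-32 of the key
b64 = base64.b64encode(key + crc).decode()   # base64 over key || CRC-32
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), b64))
PYEOF
}
# format_psk_sketch 00112233445566778899aabbccddeeff 0
# should print the NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
# string echoed in the trace above.
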
00:40:14.762 [2024-10-07 10:01:14.345783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3700565 ] 00:40:14.762 [2024-10-07 10:01:14.423014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:15.023 [2024-10-07 10:01:14.477382] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:15.593 10:01:15 keyring_linux -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:40:15.593 10:01:15 keyring_linux -- common/autotest_common.sh@867 -- # return 0 00:40:15.593 10:01:15 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:15.593 10:01:15 keyring_linux -- common/autotest_common.sh@564 -- # xtrace_disable 00:40:15.593 10:01:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:15.593 [2024-10-07 10:01:15.131386] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:15.593 null0 00:40:15.593 [2024-10-07 10:01:15.163440] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:15.593 [2024-10-07 10:01:15.163820] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:15.593 10:01:15 keyring_linux -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:40:15.593 10:01:15 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:15.593 386267859 00:40:15.593 10:01:15 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:15.593 752733270 00:40:15.593 10:01:15 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3700835 00:40:15.593 10:01:15 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3700835 /var/tmp/bperf.sock 00:40:15.593 10:01:15 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:15.593 10:01:15 keyring_linux -- common/autotest_common.sh@834 -- # '[' -z 3700835 ']' 00:40:15.593 10:01:15 keyring_linux -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:15.593 10:01:15 keyring_linux -- common/autotest_common.sh@839 -- # local max_retries=100 00:40:15.593 10:01:15 keyring_linux -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:15.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:15.593 10:01:15 keyring_linux -- common/autotest_common.sh@843 -- # xtrace_disable 00:40:15.593 10:01:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:15.593 [2024-10-07 10:01:15.240170] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
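
Distilled from the keyctl calls just traced (linux.sh@66 and @67 add the two interchange strings to the session keyring; the serials 386267859 and 752733270 come back on stdout), the kernel-keyring round trip the test exercises looks like this; the PSK literal is copied from the log, and the serial differs per run:

psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)  # link into the session keyring (@s); prints the serial
keyctl search @s user :spdk-test:key0            # resolve the serial again by description, as linux.sh@16 does
keyctl print "$sn"                               # dump the payload back, as linux.sh@27 does to verify it
keyctl unlink "$sn"                              # detach it, matching the "1 links removed" cleanup later in the trace
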
00:40:15.593 [2024-10-07 10:01:15.240219] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3700835 ] 00:40:15.854 [2024-10-07 10:01:15.315023] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:15.854 [2024-10-07 10:01:15.368276] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:16.425 10:01:16 keyring_linux -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:40:16.425 10:01:16 keyring_linux -- common/autotest_common.sh@867 -- # return 0 00:40:16.425 10:01:16 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:16.425 10:01:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:16.685 10:01:16 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:16.685 10:01:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:16.946 10:01:16 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:16.946 10:01:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:16.946 [2024-10-07 10:01:16.552605] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:17.207 nvme0n1 00:40:17.207 10:01:16 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:40:17.207 10:01:16 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:17.207 10:01:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:17.207 10:01:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:17.207 10:01:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:17.207 10:01:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:17.207 10:01:16 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:17.207 10:01:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:17.207 10:01:16 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:17.207 10:01:16 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:17.207 10:01:16 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:17.207 10:01:16 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:40:17.207 10:01:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:17.468 10:01:16 keyring_linux -- keyring/linux.sh@25 -- # sn=386267859 00:40:17.468 10:01:16 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:17.468 10:01:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:17.468 10:01:16 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 386267859 == \3\8\6\2\6\7\8\5\9 ]] 00:40:17.468 10:01:16 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 386267859 00:40:17.468 10:01:16 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:17.468 10:01:16 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:17.468 Running I/O for 1 seconds... 00:40:18.854 24565.00 IOPS, 95.96 MiB/s 00:40:18.854 Latency(us) 00:40:18.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:18.854 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:18.854 nvme0n1 : 1.01 24565.42 95.96 0.00 0.00 5195.37 4014.08 8901.97 00:40:18.854 =================================================================================================================== 00:40:18.854 Total : 24565.42 95.96 0.00 0.00 5195.37 4014.08 8901.97 00:40:18.854 { 00:40:18.854 "results": [ 00:40:18.854 { 00:40:18.854 "job": "nvme0n1", 00:40:18.854 "core_mask": "0x2", 00:40:18.854 "workload": "randread", 00:40:18.854 "status": "finished", 00:40:18.854 "queue_depth": 128, 00:40:18.854 "io_size": 4096, 00:40:18.854 "runtime": 1.005234, 00:40:18.854 "iops": 24565.42456781207, 00:40:18.854 "mibps": 95.9586897180159, 00:40:18.854 "io_failed": 0, 00:40:18.854 "io_timeout": 0, 00:40:18.854 "avg_latency_us": 5195.374759860694, 00:40:18.854 "min_latency_us": 4014.08, 00:40:18.854 "max_latency_us": 8901.973333333333 00:40:18.854 } 00:40:18.854 ], 00:40:18.854 "core_count": 1 00:40:18.854 } 00:40:18.854 10:01:18 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:18.854 10:01:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:18.854 10:01:18 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:18.854 10:01:18 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:18.854 10:01:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:18.854 10:01:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:18.854 10:01:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:18.854 10:01:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:18.854 10:01:18 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:18.854 10:01:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:18.854 10:01:18 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:18.854 10:01:18 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:18.854 10:01:18 keyring_linux -- common/autotest_common.sh@653 -- # local es=0 00:40:18.854 10:01:18 keyring_linux -- common/autotest_common.sh@655 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:18.854 10:01:18 keyring_linux -- common/autotest_common.sh@641 -- # local 
arg=bperf_cmd 00:40:18.854 10:01:18 keyring_linux -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:40:18.854 10:01:18 keyring_linux -- common/autotest_common.sh@645 -- # type -t bperf_cmd 00:40:18.854 10:01:18 keyring_linux -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:40:18.854 10:01:18 keyring_linux -- common/autotest_common.sh@656 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:18.854 10:01:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:19.113 [2024-10-07 10:01:18.654484] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:19.113 [2024-10-07 10:01:18.655256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4b1e0 (107): Transport endpoint is not connected 00:40:19.113 [2024-10-07 10:01:18.656252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4b1e0 (9): Bad file descriptor 00:40:19.113 [2024-10-07 10:01:18.657253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:19.113 [2024-10-07 10:01:18.657263] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:19.114 [2024-10-07 10:01:18.657269] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:19.114 [2024-10-07 10:01:18.657275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:40:19.114 request: 00:40:19.114 { 00:40:19.114 "name": "nvme0", 00:40:19.114 "trtype": "tcp", 00:40:19.114 "traddr": "127.0.0.1", 00:40:19.114 "adrfam": "ipv4", 00:40:19.114 "trsvcid": "4420", 00:40:19.114 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:19.114 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:19.114 "prchk_reftag": false, 00:40:19.114 "prchk_guard": false, 00:40:19.114 "hdgst": false, 00:40:19.114 "ddgst": false, 00:40:19.114 "psk": ":spdk-test:key1", 00:40:19.114 "allow_unrecognized_csi": false, 00:40:19.114 "method": "bdev_nvme_attach_controller", 00:40:19.114 "req_id": 1 00:40:19.114 } 00:40:19.114 Got JSON-RPC error response 00:40:19.114 response: 00:40:19.114 { 00:40:19.114 "code": -5, 00:40:19.114 "message": "Input/output error" 00:40:19.114 } 00:40:19.114 10:01:18 keyring_linux -- common/autotest_common.sh@656 -- # es=1 00:40:19.114 10:01:18 keyring_linux -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:40:19.114 10:01:18 keyring_linux -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:40:19.114 10:01:18 keyring_linux -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:40:19.114 10:01:18 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:19.114 10:01:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:19.114 10:01:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:19.114 10:01:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:19.114 10:01:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:19.114 10:01:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:19.114 10:01:18 keyring_linux -- keyring/linux.sh@33 -- # sn=386267859 00:40:19.114 10:01:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 386267859 00:40:19.114 1 links removed 00:40:19.114 10:01:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:19.114 10:01:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:19.114 10:01:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:40:19.114 10:01:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:19.114 10:01:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:19.114 10:01:18 keyring_linux -- keyring/linux.sh@33 -- # sn=752733270 00:40:19.114 10:01:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 752733270 00:40:19.114 1 links removed 00:40:19.114 10:01:18 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3700835 00:40:19.114 10:01:18 keyring_linux -- common/autotest_common.sh@953 -- # '[' -z 3700835 ']' 00:40:19.114 10:01:18 keyring_linux -- common/autotest_common.sh@957 -- # kill -0 3700835 00:40:19.114 10:01:18 keyring_linux -- common/autotest_common.sh@958 -- # uname 00:40:19.114 10:01:18 keyring_linux -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:40:19.114 10:01:18 keyring_linux -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3700835 00:40:19.114 10:01:18 keyring_linux -- common/autotest_common.sh@959 -- # process_name=reactor_1 00:40:19.114 10:01:18 keyring_linux -- common/autotest_common.sh@963 -- # '[' reactor_1 = sudo ']' 00:40:19.114 10:01:18 keyring_linux -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3700835' 00:40:19.114 killing process with pid 3700835 00:40:19.114 10:01:18 keyring_linux -- common/autotest_common.sh@972 -- # kill 3700835 00:40:19.114 Received shutdown signal, test time was about 1.000000 seconds 00:40:19.114 00:40:19.114 
Latency(us) 00:40:19.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:19.114 =================================================================================================================== 00:40:19.114 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:19.114 10:01:18 keyring_linux -- common/autotest_common.sh@977 -- # wait 3700835 00:40:19.375 10:01:18 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3700565 00:40:19.375 10:01:18 keyring_linux -- common/autotest_common.sh@953 -- # '[' -z 3700565 ']' 00:40:19.375 10:01:18 keyring_linux -- common/autotest_common.sh@957 -- # kill -0 3700565 00:40:19.375 10:01:18 keyring_linux -- common/autotest_common.sh@958 -- # uname 00:40:19.375 10:01:18 keyring_linux -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:40:19.375 10:01:18 keyring_linux -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 3700565 00:40:19.375 10:01:18 keyring_linux -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:40:19.375 10:01:18 keyring_linux -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:40:19.375 10:01:18 keyring_linux -- common/autotest_common.sh@971 -- # echo 'killing process with pid 3700565' 00:40:19.375 killing process with pid 3700565 00:40:19.375 10:01:18 keyring_linux -- common/autotest_common.sh@972 -- # kill 3700565 00:40:19.375 10:01:18 keyring_linux -- common/autotest_common.sh@977 -- # wait 3700565 00:40:19.636 00:40:19.636 real 0m5.206s 00:40:19.636 user 0m9.675s 00:40:19.636 sys 0m1.414s 00:40:19.636 10:01:19 keyring_linux -- common/autotest_common.sh@1129 -- # xtrace_disable 00:40:19.636 10:01:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:19.636 ************************************ 00:40:19.636 END TEST keyring_linux 00:40:19.636 ************************************ 00:40:19.636 10:01:19 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:40:19.636 10:01:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:40:19.636 10:01:19 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:40:19.636 10:01:19 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:40:19.636 10:01:19 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:40:19.636 10:01:19 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:40:19.636 10:01:19 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:40:19.636 10:01:19 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:40:19.636 10:01:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:40:19.636 10:01:19 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:40:19.636 10:01:19 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:40:19.636 10:01:19 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:40:19.636 10:01:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:40:19.636 10:01:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:40:19.636 10:01:19 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:40:19.636 10:01:19 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:40:19.636 10:01:19 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:40:19.636 10:01:19 -- common/autotest_common.sh@727 -- # xtrace_disable 00:40:19.636 10:01:19 -- common/autotest_common.sh@10 -- # set +x 00:40:19.636 10:01:19 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:40:19.636 10:01:19 -- common/autotest_common.sh@1380 -- # local autotest_es=0 00:40:19.636 10:01:19 -- common/autotest_common.sh@1381 -- # xtrace_disable 00:40:19.636 10:01:19 -- common/autotest_common.sh@10 -- # set +x 00:40:27.777 INFO: APP EXITING 00:40:27.777 INFO: killing all VMs 00:40:27.777 INFO: killing vhost app 00:40:27.777 INFO: 
EXIT DONE 00:40:31.082 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:40:31.082 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:40:31.082 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:40:31.082 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:40:31.082 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:40:31.082 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:40:31.082 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:40:31.082 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:40:31.082 0000:65:00.0 (144d a80a): Already using the nvme driver 00:40:31.082 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:40:31.082 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:40:31.082 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:40:31.082 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:40:31.082 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:40:31.082 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:40:31.082 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:40:31.082 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:40:35.285 Cleaning 00:40:35.285 Removing: /var/run/dpdk/spdk0/config 00:40:35.285 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:35.285 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:35.285 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:35.285 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:35.285 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:35.285 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:35.285 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:35.285 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:35.285 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:35.285 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:35.285 Removing: /var/run/dpdk/spdk1/config 00:40:35.285 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:35.285 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:35.285 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:35.285 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:35.285 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:40:35.285 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:40:35.285 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:40:35.285 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:40:35.285 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:35.285 Removing: /var/run/dpdk/spdk1/hugepage_info 00:40:35.285 Removing: /var/run/dpdk/spdk2/config 00:40:35.285 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:40:35.285 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:40:35.285 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:40:35.285 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:40:35.285 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:40:35.285 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:40:35.285 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:40:35.285 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:40:35.285 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:40:35.285 Removing: /var/run/dpdk/spdk2/hugepage_info 00:40:35.285 Removing: /var/run/dpdk/spdk3/config 00:40:35.285 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:40:35.285 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:40:35.285 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:40:35.285 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:40:35.285 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:40:35.285 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:40:35.285 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:40:35.285 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:40:35.285 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:40:35.285 Removing: /var/run/dpdk/spdk3/hugepage_info 00:40:35.285 Removing: /var/run/dpdk/spdk4/config 00:40:35.285 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:40:35.285 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:40:35.285 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:40:35.285 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:40:35.285 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:40:35.285 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:40:35.285 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:40:35.285 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:40:35.285 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:40:35.285 Removing: /var/run/dpdk/spdk4/hugepage_info 00:40:35.285 Removing: /dev/shm/bdev_svc_trace.1 00:40:35.285 Removing: /dev/shm/nvmf_trace.0 00:40:35.285 Removing: /dev/shm/spdk_tgt_trace.pid3118229 00:40:35.285 Removing: /var/run/dpdk/spdk0 00:40:35.285 Removing: /var/run/dpdk/spdk1 00:40:35.285 Removing: /var/run/dpdk/spdk2 00:40:35.285 Removing: /var/run/dpdk/spdk3 00:40:35.285 Removing: /var/run/dpdk/spdk4 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3116724 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3118229 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3119090 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3120126 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3120464 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3121536 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3121736 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3122091 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3123264 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3124189 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3124862 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3125308 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3125730 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3125855 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3126164 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3126519 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3126916 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3127982 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3131584 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3131951 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3132326 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3132589 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3133037 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3133152 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3133743 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3133760 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3134121 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3134423 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3134498 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3134828 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3135284 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3135632 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3136044 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3140758 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3146117 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3158319 
00:40:35.285 Removing: /var/run/dpdk/spdk_pid3159134 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3164446 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3164841 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3170267 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3178010 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3181349 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3194082 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3205272 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3207383 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3208632 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3230131 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3235505 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3293282 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3299748 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3306995 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3314515 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3314590 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3315605 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3316618 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3317625 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3318289 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3318310 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3318628 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3318660 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3318662 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3319668 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3320668 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3321697 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3322345 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3322387 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3322688 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3324132 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3325535 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3336023 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3370138 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3376405 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3378336 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3380516 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3380857 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3381198 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3381479 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3382256 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3384283 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3385402 00:40:35.285 Removing: /var/run/dpdk/spdk_pid3386071 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3388775 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3389489 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3390369 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3395526 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3402325 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3402327 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3402329 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3407195 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3417595 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3422917 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3430591 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3432088 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3433893 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3435457 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3441301 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3446415 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3455911 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3455974 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3461104 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3461437 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3461775 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3462117 
00:40:35.548 Removing: /var/run/dpdk/spdk_pid3462225 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3467890 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3468410 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3473969 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3477555 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3484464 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3491251 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3501518 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3510185 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3510226 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3534260 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3535016 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3535711 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3536404 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3537450 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3538253 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3539106 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3539826 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3544947 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3545285 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3552717 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3552939 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3559639 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3564740 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3576418 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3577187 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3582935 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3583344 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3588550 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3595411 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3598439 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3611074 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3621897 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3623903 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3624911 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3645493 00:40:35.548 Removing: /var/run/dpdk/spdk_pid3650385 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3653566 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3661289 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3661390 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3667393 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3669658 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3672100 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3673358 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3675816 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3677227 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3687978 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3688640 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3689237 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3692151 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3692623 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3693291 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3698228 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3698242 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3700057 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3700565 00:40:35.809 Removing: /var/run/dpdk/spdk_pid3700835 00:40:35.809 Clean 00:40:35.809 10:01:35 -- common/autotest_common.sh@1439 -- # return 0 00:40:35.809 10:01:35 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:40:35.809 10:01:35 -- common/autotest_common.sh@733 -- # xtrace_disable 00:40:35.810 10:01:35 -- common/autotest_common.sh@10 -- # set +x 00:40:35.810 10:01:35 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:40:35.810 10:01:35 -- common/autotest_common.sh@733 -- # xtrace_disable 00:40:35.810 10:01:35 -- common/autotest_common.sh@10 -- # set +x 00:40:36.071 
10:01:35 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:36.071 10:01:35 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:40:36.071 10:01:35 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:40:36.071 10:01:35 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:40:36.071 10:01:35 -- spdk/autotest.sh@394 -- # hostname 00:40:36.071 10:01:35 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:40:36.071 geninfo: WARNING: invalid characters removed from testname! 00:41:02.658 10:02:00 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:04.047 10:02:03 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:05.967 10:02:05 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:07.354 10:02:06 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:09.271 10:02:08 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:10.657 10:02:10 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:12.571 10:02:11 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:41:12.571 10:02:11 -- common/autotest_common.sh@1625 -- $ [[ y == y ]] 00:41:12.571 10:02:11 -- common/autotest_common.sh@1626 -- $ lcov --version 00:41:12.571 10:02:11 -- common/autotest_common.sh@1626 -- $ awk '{print $NF}' 00:41:12.571 10:02:12 -- common/autotest_common.sh@1626 -- $ lt 1.15 2 00:41:12.571 10:02:12 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:41:12.571 10:02:12 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:41:12.571 10:02:12 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:41:12.571 10:02:12 -- scripts/common.sh@336 -- $ IFS=.-: 00:41:12.571 10:02:12 -- scripts/common.sh@336 -- $ read -ra ver1 00:41:12.571 10:02:12 -- scripts/common.sh@337 -- $ IFS=.-: 00:41:12.571 10:02:12 -- scripts/common.sh@337 -- $ read -ra ver2 00:41:12.571 10:02:12 -- scripts/common.sh@338 -- $ local 'op=<' 00:41:12.571 10:02:12 -- scripts/common.sh@340 -- $ ver1_l=2 00:41:12.571 10:02:12 -- scripts/common.sh@341 -- $ ver2_l=1 00:41:12.571 10:02:12 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:41:12.571 10:02:12 -- scripts/common.sh@344 -- $ case "$op" in 00:41:12.571 10:02:12 -- scripts/common.sh@345 -- $ : 1 00:41:12.571 10:02:12 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:41:12.571 10:02:12 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:12.571 10:02:12 -- scripts/common.sh@365 -- $ decimal 1 00:41:12.571 10:02:12 -- scripts/common.sh@353 -- $ local d=1 00:41:12.571 10:02:12 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:41:12.571 10:02:12 -- scripts/common.sh@355 -- $ echo 1 00:41:12.571 10:02:12 -- scripts/common.sh@365 -- $ ver1[v]=1 00:41:12.571 10:02:12 -- scripts/common.sh@366 -- $ decimal 2 00:41:12.571 10:02:12 -- scripts/common.sh@353 -- $ local d=2 00:41:12.571 10:02:12 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:41:12.571 10:02:12 -- scripts/common.sh@355 -- $ echo 2 00:41:12.571 10:02:12 -- scripts/common.sh@366 -- $ ver2[v]=2 00:41:12.571 10:02:12 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:41:12.571 10:02:12 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:41:12.571 10:02:12 -- scripts/common.sh@368 -- $ return 0 00:41:12.571 10:02:12 -- common/autotest_common.sh@1627 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:12.571 10:02:12 -- common/autotest_common.sh@1639 -- $ export 'LCOV_OPTS= 00:41:12.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:12.571 --rc genhtml_branch_coverage=1 00:41:12.571 --rc genhtml_function_coverage=1 00:41:12.571 --rc genhtml_legend=1 00:41:12.571 --rc geninfo_all_blocks=1 00:41:12.571 --rc geninfo_unexecuted_blocks=1 00:41:12.571 00:41:12.571 ' 00:41:12.571 10:02:12 -- common/autotest_common.sh@1639 -- $ LCOV_OPTS=' 00:41:12.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:12.571 --rc genhtml_branch_coverage=1 00:41:12.571 --rc genhtml_function_coverage=1 00:41:12.571 --rc genhtml_legend=1 00:41:12.571 --rc geninfo_all_blocks=1 00:41:12.571 --rc geninfo_unexecuted_blocks=1 00:41:12.571 00:41:12.571 ' 00:41:12.571 10:02:12 -- common/autotest_common.sh@1640 -- $ export 'LCOV=lcov 00:41:12.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:12.571 --rc genhtml_branch_coverage=1 00:41:12.571 
--rc genhtml_function_coverage=1 00:41:12.571 --rc genhtml_legend=1 00:41:12.571 --rc geninfo_all_blocks=1 00:41:12.571 --rc geninfo_unexecuted_blocks=1 00:41:12.571 00:41:12.571 ' 00:41:12.571 10:02:12 -- common/autotest_common.sh@1640 -- $ LCOV='lcov 00:41:12.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:12.571 --rc genhtml_branch_coverage=1 00:41:12.571 --rc genhtml_function_coverage=1 00:41:12.571 --rc genhtml_legend=1 00:41:12.571 --rc geninfo_all_blocks=1 00:41:12.571 --rc geninfo_unexecuted_blocks=1 00:41:12.571 00:41:12.571 ' 00:41:12.571 10:02:12 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:12.571 10:02:12 -- scripts/common.sh@15 -- $ shopt -s extglob 00:41:12.571 10:02:12 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:41:12.571 10:02:12 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:12.571 10:02:12 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:12.571 10:02:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.571 10:02:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.572 10:02:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.572 10:02:12 -- paths/export.sh@5 -- $ export PATH 00:41:12.572 10:02:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.572 10:02:12 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:41:12.572 10:02:12 -- common/autobuild_common.sh@486 -- $ date +%s 00:41:12.572 10:02:12 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728288132.XXXXXX 00:41:12.572 10:02:12 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728288132.ndNz0I 00:41:12.572 10:02:12 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:41:12.572 10:02:12 -- 
common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:41:12.572 10:02:12 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:41:12.572 10:02:12 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:41:12.572 10:02:12 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:41:12.572 10:02:12 -- common/autobuild_common.sh@502 -- $ get_config_params 00:41:12.572 10:02:12 -- common/autotest_common.sh@410 -- $ xtrace_disable 00:41:12.572 10:02:12 -- common/autotest_common.sh@10 -- $ set +x 00:41:12.572 10:02:12 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:41:12.572 10:02:12 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:41:12.572 10:02:12 -- pm/common@17 -- $ local monitor 00:41:12.572 10:02:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:41:12.572 10:02:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:41:12.572 10:02:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:41:12.572 10:02:12 -- pm/common@21 -- $ date +%s 00:41:12.572 10:02:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:41:12.572 10:02:12 -- pm/common@25 -- $ sleep 1 00:41:12.572 10:02:12 -- pm/common@21 -- $ date +%s 00:41:12.572 10:02:12 -- pm/common@21 -- $ date +%s 00:41:12.572 10:02:12 -- pm/common@21 -- $ date +%s 00:41:12.572 10:02:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728288132 00:41:12.572 10:02:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728288132 00:41:12.572 10:02:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728288132 00:41:12.572 10:02:12 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728288132 00:41:12.572 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728288132_collect-cpu-load.pm.log 00:41:12.572 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728288132_collect-vmstat.pm.log 00:41:12.572 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728288132_collect-cpu-temp.pm.log 00:41:12.572 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728288132_collect-bmc-pm.bmc.pm.log 00:41:13.516 10:02:13 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 
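
For reference, the coverage epilogue traced earlier (autotest.sh@394 through @403) reduces to a capture, merge, and filter pipeline. In this sketch $SPDK_DIR and $OUT are stand-ins for the workspace and output paths in the log, and the long --rc lcov/genhtml options and the --ignore-errors flag used on the /usr/* pass are omitted for brevity:

lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"  # capture counters from the test run
lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"  # merge the pre-test baseline with the test capture
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
  lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"                # strip out-of-tree and helper-app code
done
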
00:41:13.516 10:02:13 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:41:13.516 10:02:13 -- spdk/autopackage.sh@14 -- $ timing_finish
00:41:13.516 10:02:13 -- common/autotest_common.sh@739 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:41:13.516 10:02:13 -- common/autotest_common.sh@740 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:41:13.516 10:02:13 -- common/autotest_common.sh@743 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:41:13.516 10:02:13 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:41:13.516 10:02:13 -- pm/common@29 -- $ signal_monitor_resources TERM
00:41:13.516 10:02:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:41:13.516 10:02:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:13.516 10:02:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:41:13.516 10:02:13 -- pm/common@44 -- $ pid=3713927
00:41:13.516 10:02:13 -- pm/common@50 -- $ kill -TERM 3713927
00:41:13.516 10:02:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:13.516 10:02:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:41:13.516 10:02:13 -- pm/common@44 -- $ pid=3713929
00:41:13.516 10:02:13 -- pm/common@50 -- $ kill -TERM 3713929
00:41:13.516 10:02:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:13.516 10:02:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:41:13.516 10:02:13 -- pm/common@44 -- $ pid=3713930
00:41:13.516 10:02:13 -- pm/common@50 -- $ kill -TERM 3713930
00:41:13.516 10:02:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:13.516 10:02:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:41:13.516 10:02:13 -- pm/common@44 -- $ pid=3713954
00:41:13.516 10:02:13 -- pm/common@50 -- $ sudo -E kill -TERM 3713954
00:41:13.516 + [[ -n 3035576 ]]
00:41:13.516 + sudo kill 3035576
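Annotation: the pm/common@42-50 trace above is the matching shutdown path for the monitors started earlier: for each one, test for its pid file, read the pid, and send SIGTERM (via sudo -E for the BMC collector, which runs as root). A bash sketch of that loop as it can be reconstructed from the xtrace; only the tested paths and pids are confirmed by the log, and reading the pid with $(<...) is an assumption about pm/common@44.

    # Reconstruction of the traced TERM loop; the control flow is inferred.
    power_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    signal=TERM
    for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        pidfile=$power_dir/$monitor.pid
        [[ -e $pidfile ]] || continue   # pm/common@43: skip monitors that never started
        pid=$(<"$pidfile")              # pm/common@44 showed pid=3713927 etc.
        if [[ $monitor == collect-bmc-pm ]]; then
            sudo -E kill -$signal "$pid"   # pm/common@50: BMC collector runs as root
        else
            kill -$signal "$pid"
        fi
    done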
00:41:13.787 Pausing (Preparing for shutdown)
01:03:07.814 Resuming build at Mon Oct 07 08:24:07 UTC 2024 after Jenkins restart
01:03:18.119 Waiting for reconnection of CYP11 before proceeding with build
01:03:18.220 Timeout expired 6.5 sec ago
01:03:18.220 Cancelling nested steps due to timeout
01:03:18.228 Ready to run at Mon Oct 07 08:24:17 UTC 2024
01:03:18.233 [Pipeline] }
01:03:18.264 [Pipeline] // stage
01:03:18.274 [Pipeline] }
01:03:18.289 [Pipeline] // timeout
01:03:18.296 [Pipeline] }
01:03:18.301 Timeout has been exceeded
01:03:18.301 org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: 6341dad6-1282-4604-a017-942af07df230
01:03:18.301 Setting overall build result to ABORTED
01:03:18.320 [Pipeline] // catchError
01:03:18.325 [Pipeline] }
01:03:18.348 [Pipeline] // wrap
01:03:18.352 [Pipeline] }
01:03:18.359 [Pipeline] // catchError
01:03:18.379 [Pipeline] stage
01:03:18.381 [Pipeline] { (Epilogue)
01:03:18.390 [Pipeline] catchError
01:03:18.392 [Pipeline] {
01:03:18.402 [Pipeline] echo
01:03:18.403 Cleanup processes
01:03:18.409 [Pipeline] sh
01:03:19.303 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
01:03:19.303 3719421 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
01:03:19.324 [Pipeline] sh
01:03:19.661 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
01:03:19.661 ++ grep -v 'sudo pgrep'
01:03:19.661 ++ awk '{print $1}'
01:03:19.661 + sudo kill -9
01:03:19.661 + true
01:03:19.676 [Pipeline] sh
01:03:19.975 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
01:03:32.274 [Pipeline] sh
01:03:32.571 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
01:03:32.571 Artifacts sizes are good
01:03:32.588 [Pipeline] archiveArtifacts
01:03:32.595 Archiving artifacts
01:03:33.220 [Pipeline] sh
01:03:33.521 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
01:03:33.538 [Pipeline] cleanWs
01:03:33.548 [WS-CLEANUP] Deleting project workspace...
01:03:33.548 [WS-CLEANUP] Deferred wipeout is used...
01:03:33.562 [WS-CLEANUP] done
01:03:33.564 [Pipeline] }
01:03:33.578 [Pipeline] // catchError
01:03:33.586 [Pipeline] echo
01:03:33.588 Tests finished with errors. Please check the logs for more info.
01:03:33.591 [Pipeline] echo
01:03:33.592 Execution node will be rebooted.
01:03:33.605 [Pipeline] build
01:03:33.608 Scheduling project: reset-job
01:03:33.620 [Pipeline] sh
01:03:33.915 + logger -p user.info -t JENKINS-CI
01:03:33.927 [Pipeline] }
01:03:33.939 [Pipeline] // stage
01:03:33.944 [Pipeline] }
01:03:33.958 [Pipeline] // node
01:03:33.963 [Pipeline] End of Pipeline
01:03:33.987 Finished: ABORTED
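Annotation: the epilogue's cleanup step above pipes sudo pgrep -af through grep -v 'sudo pgrep' and awk '{print $1}' into sudo kill -9. In this run the pipeline matched nothing, so kill -9 ran with no pid arguments and failed; the trailing "+ true" suggests the script masks that failure (for example with "|| true") so cleanup cannot fail the stage. A sketch of that step with a hypothetical guard that skips the kill when the pid list is empty.

    # The guard is a hypothetical hardening; the logged script appears to
    # rely on '|| true' (the '+ true' line) to swallow the empty-arg failure.
    pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
        | grep -v 'sudo pgrep' | awk '{print $1}')
    if [[ -n $pids ]]; then
        sudo kill -9 $pids   # unquoted on purpose: one argument per pid
    fi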